Tuesday, June 1, 2010

Final 2 Spring 2010

The final results of a major international study of the potential link between cellphone use and cancer were published last week. The finding: Using a cellphone seems to protect against two types of brain tumors.

Even the researchers didn't quite believe it.

The apparent shield of cellphone radiation, most likely fictitious, illustrates how hard it is to analyze, let alone quantify, the potential for a small elevated risk in a rare disease from a widespread, mundane activity.

"They found that ever having used a cellphone appeared to be protective [against] brain cancer," says David O. Carpenter, director of the University at Albany's Institute for Health and the Environment, in Albany, N.Y. "And that just simply makes no sense."

[Photo: Reuters] A trader attached to his phone. A bumpy study shows how hard it is to gauge the impact of a widespread activity on the risk for a rare disease.

The study was funded in part by the Mobile Manufacturers' Forum and GSM Association, two wireless industry groups. The researchers had protections in place they say guarded their independence. Most criticisms of the study haven't focused on the funding.

The researchers conducting the study, which was called Interphone, were flummoxed at nearly every turn. They tried to find a control group that matched participants who had suffered a brain tumor, but potential subjects were reluctant to participate, for various reasons. Then there were subtle behavioral differences between individuals with and without brain tumors. Internal squabbling over how to interpret the results delayed publication for so long that usage patterns of study participants didn't match those of mobile users today.

The Interphone researchers acknowledged in their resulting paper, published online last week by the International Journal of Epidemiology, that something had probably gone wrong with the controls.

The study tracked cellphone use across 13 countries. It looked at a group of adults 30 to 59 years old who had been diagnosed with glioma or meningioma, types of brain tumors that can be either benign or malignant, between 2000 and 2004. They were compared with control subjects, people selected to match the individuals with tumors in terms of age, gender and place of residence.

Then both groups were interviewed extensively about their cellphone use. If the two groups matched in other ways, and the group with brain tumors used cellphones more frequently, that would suggest that cellphone use might have caused the tumors.

But they didn't really match. For one thing, just 53% of people selected to participate as controls agreed, and a survey of those who declined showed that they were less likely to use cellphones than those who participated. That may have artificially raised cellphone use in the tumor-free control group and made mobile phones seem less dangerous than they are.

The result is a strange set of numbers. Many levels of cellphone use appeared to reduce the chance of developing a tumor. Only the people who talked on cellphones the most had a significantly greater chance of developing glioma—40% greater—than those who didn't use cellphones.

Yet, as some of the study's authors themselves pointed out, if those who never used cellphones—who were more prevalent among those with tumors—were excluded, and the lightest users were contrasted with the more avid ones, then the bizarre protective effect of cellphone use mostly disappeared, and the risk among the heaviest users was 82% greater.

Even in this analysis, the risk doesn't steadily increase with use, which is what epidemiologists typically look for—a discernible dose-response relationship. "It's certainly less compelling than if you saw some kind of graded response," says David A. Savitz, director of the Disease Prevention and Public Health Institute at the Mount Sinai School of Medicine in New York.

Disputes about how to interpret these numbers held up publication of the research, says Christopher Wild, director of the World Health Organization's International Agency for Research on Cancer, in Lyon, which coordinated the study. The study was published more than six years after its conclusion, by which time cellphone use had both surged and changed.

"Interphone made more effort than most other studies to identify and quantify its own flaws," says co-author Martine Vrijheid, a researcher at the agency. "It has thereby also attracted more attention to these flaws."

A U.K. study under way will take a different approach, tracking cellphone users over time to see if heavy use is tied to a greater incidence of cancer. But the study will still need to enlist hundreds of thousands of volunteers to yield useful results, and it could take decades to spot any divergence in cancer rates.

Epidemiologists say such research may be difficult and expensive but is important.

"Even if you think it's very, very unlikely that it's a problem," Dr. Savitz says, "it's always worth some effort to make sure you haven't done something really terrible" as a society by enabling widespread cellphone use. Such open questions, and the difficulty of solving them, he says, "keep epidemiologists in business for a long time."

Tuesday, May 18, 2010

First Half Final Spring 2010

Linda Freedman, LCSW, LMFT, PhD

Final Exam, May 18, 2010

The author of Freakonomics looks critically at previous research and then presents questions for new research.

Choose any of the chapters discussed in Freakonomics, reread it, and address the questions below.

(1) What is the research question in the chapter?

(2-3-4) Name three variables of study.

(5-6-7) How is each measured (operationalized)?

(8) What was the theory driving the research?

(9) What did the researcher expect to find, based upon the theory?

(10) State what the researcher expected to find in the form of a hypothesis.

(11) What were the ethical issues in the study, if there were any?

The following is a problem that begs for research. Your job is to design a qualitative study for inquiry.

Juvenile delinquency and adolescent fatherhood are highly correlated, and both predict future continued involvement in the criminal justice system.

Approximately one-quarter of male teens who break the law become fathers before they are twenty years old, as opposed to 4-7% of the general teenage male population.


Because the offenders who are also dads are incarcerated, this poses economic and social problems for the mothers of these children, and puts the children at risk.

A lit review tells us that quantitative data indicates that young father offenders also repeat crimes at a higher rate than offenders who are not fathers. But qualitative studies indicate that being a father is a reason these young dads proffer as an incentive not to repeat crimes.

It would seem that there needs to be more inquiry into the beliefs of incarcerated teenage fathers about their roles as fathers, and their transition into paternal roles in light of their criminal involvement.

The literature also suggests that (a) offenders who form strong bonds in the workplace and have marital partners tend to desist from crime at a higher rate than offenders who do not form such bonds, (b) being in jail interrupts that bonding process, and this often has to do with problems between the child's mother and the offending teenage father, and (c) mothers often discourage a relationship between the offending father and his child.


Now.


(12-13-14) Pose three questions for the subjects of your study that you feel will inform professionals about this issue.

(15-16) Assume you have access to 29 incarcerated young men, 9 of whom are fathers, 7 of whom have files that already contain data about their attitudes and experiences. Some might choose only to use these 7. Why? What kind of methodology is that?

(17-18-19) How will you collect data if you use the incarcerated men?


(20-21-22) What kinds of themes (name five) do you guess will be coded when you begin to analyze your data?


(23) What kinds of ethical issues do you anticipate?


A random study of NASW members explored violence in the workplace. Three thousand members were invited to participate, but only 1,029 returned their surveys.

The researcher used a modified version of the Revised Conflict Tactics Scale (CTS2) to measure physical and psychological assault. Questionnaires asked the respondents which of the specific acts or events they had ever experienced, had experienced in the past year, and how many times they had experienced each act in the past year, a total of 20 items on the scale. Respondents were asked about being both a victim and a perpetrator.

(24) What do you think of that return rate?

(25) Name one internal validity issue that strikes you as a problem using this instrument, the CTS2.

(26) How do you imagine the researcher checked for reliability of the CTS2?

Before querying social workers, the researcher did a literature review and found that, depending upon the study, 42%-62% of the social workers studied had been verbally abused, 17%-25% had been physically threatened, and 3%-23% had been physically assaulted.

The results of this study, however, showed that an overwhelming number of social workers had experienced violence in the workplace (most were in private practice, by the way): 86%. And one quarter had been perpetrators in some way or another!


(27-28-29) Why do you think there seems to be a jump in the numbers? Do you feel there might be a reliability or validity issue? If so, what kind?

(30-31) What kind of a study was this? Which research design, what type?

(32-33) What risks and benefits (an IRB consideration) would you have to tell your subjects about before you started your inquiry?


Good Luck!

Tuesday, March 16, 2010

Breakdown of Reading Assignments

3/16 pp. 201-304

(ch 10) Constructing Measurement Instruments, causal inference and correlational designs, threats to internal validity, external validity

(ch 11) Experimental designs

(ch 12) Single-case evaluation designs

3/23 pp. 305-414

(ch 13) Program evaluation

(ch 14) Sampling

(ch 15) Survey research

April 17 at the Institute: pp. 390-476

(ch 16) Analyzing available records

(ch 17) Qualitative research: General Principles

(ch 18) Qualitative research: Specific Methods

(ch 19) Qualitative Data Analysis


5/4 pp. 477-523


(ch 20) Quantitative Data Analysis

ALSO: Read Freakonomics

5/18 pp. 524-632

(ch 21) Inferential Data Analysis Part 1

(ch 22) Inferential Data Analysis Part 2

(ch 23) Writing research proposals

Appendix Using the Library

Statistics for estimating sampling error

Proportion under Normal Curve exceeded by Effect-Size Values

Learner’s guide to SPSS

Tuesday, March 2, 2010

Syllabus for Spring 2010

RM 512 Research Process, Distance Learning
Spring 2010
Linda Freedman, LCSW, LMFT, PhD
Office: 773-271-7111

Distance Learning students are scheduled for two sessions on-site at ICSW.

Our usual time will be Tuesdays, 8:00-10:00 pm. Due to the Jewish holiday of Passover, however, our March 30 class will be on March 23. I switched with Dr. Sarasohn.

At the Institute: Feb 14, April 18.

ONLINE: Mar 2, 16, 23, May 4, 18, June 1


Readings in Rubin and Babbie
3-16-10: pp. 201-304
3-23-10: pp. 305-414
5-04-10: pp. 415-523 AND read Freakonomics (not the latest edition)
5-18-10: pp. 524-632

Class quizzes will be on March 16 and May 4. You will be able to collaborate on these quizzes.

You'll have a take-home final on May 18, due May 25. Part II of the final will be an in-class exam on June 1. You won't be able to collaborate on Part II, and shouldn't collaborate on the take-home, either. But that's obviously going to be up to you. You're on your honor.

The final will cover both semesters, so review the first semester's notes to prepare.

Attendance:

This course is taught virtually in a lecture/discussion format on the web.

Be prepared to participate, please, because class participation will make up 20% of your grade.

For students who miss more than one class session, except in a documented personal emergency, the overall course grade will be lowered one level. Students who miss more than two class sessions will automatically fail the course. In cases of personal emergency, the student will be asked to withdraw from the course and retake it the following academic year.

Required Texts & Readings

American Psychological Association (2009). Publication manual of the American Psychological Association (6th ed.). Washington D.C.: Author.

Rubin, A. & Babbie, E. Research methods for social work (2nd or 6th
Eds.). Pacific Grove, CA: Brooks/Cole Publishing Co.

Levitt, S. D., & Dubner, S. J. Freakonomics: A rogue economist explores the hidden side of everything (Rev. ed., paperback).

Recommended:

Strauss, A. & Corbin, J. (1990 or later edition). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.

Locke, L.F., Spirduso, W. W., & Silverman, S. J. (1993). Proposals that work. Newbury Park, CA: Sage.

Institute for Clinical Social Work web page – http://www.icsw.edu

I'll email you additional readings.

Please note: Required readings and assignments will be posted on the class blog, along with an updated syllabus. It is your responsibility to check often for revisions. You can always email me if you aren’t sure of a class meeting or time. Here’s the URL to the blog. http://icswrp.blogspot.com

Course Description

The purpose of this course is to continue to provide incoming PhD students the opportunity to become familiar and comfortable with the research process, particularly the doctoral research process.

The underlying aim is to assist students as they design mock-up research proposals. In class, and on the final, students will:

(1) explore problems important to the field of social work

(2) pose research questions about such problems

(3) suggest hypotheses based upon theory

(4) define and operationalize variables

(5) define measurement strategies

(6) propose data collection methodology

(7) know how to present and disseminate research findings.


This course will not provide students with expertise in any one research area, but it will provide a good foundation for further study and education. The hope is to promote flexibility in future research endeavors.

Learning will be a collaborative effort, drawing upon the experiences and expertise of all class members. 40% of your class grade is based upon weekly quizzes that you will work on together during class time. If you miss the class, you lose these points. I’m sorry.

Other Learning Objectives:

Upon completion of the course, students should be able to:

• Promote critical analytic skills for developing, implementing, and critiquing research
problems and questions appropriate to all levels of practice, including practice at work sites.

• Select appropriate quantitative and qualitative approaches to guide research on a particular topic, including the use of available data, experimental and quasi-experimental designs, surveys, intensive interviewing, and participant observation.

• Implement procedures for assuring the ethical conduct of research, including the necessity of obtaining informed consent; inclusion of safeguards to ensure the confidentiality of research data; assurance of voluntary participation in research; and an appreciation for not using vulnerable populations as research subjects just because they may be more available.

• Use current technology, including the Internet and a variety of existing social science and social work databases for understanding specific human conditions and biopsychosocial interventions.

• Design studies that contribute to knowledge about social work clients, practice, and policy.

• Critique existing research in terms of its ability to rule out other possible explanations for findings.

• Critique existing research in terms of its relevance and generalizability, particularly to women; racial, ethnic, and other minority groups; and people from different socioeconomic classes.

• Evaluate research according to principles of social justice, cultural competence, and utility.

• Develop procedures for coping with organizational and sociopolitical issues in agency-based research, concerning such issues as how research projects get framed and how data access can be affected.

Course Expectations:

Students are expected to complete assigned readings in advance of class meetings.

We won’t necessarily discuss the readings in class, but you are expected to know the material. For example, you will read the entire APA style citation manual, and you should be able to use it, but we’re not spending time in class on it because it is self-explanatory. But you may be asked to use it on the final.

All students will be held accountable for adhering to academic and nonacademic standards of conduct as described in the ICSW Student Handbook, available on the ICSW website.

Accommodations for Students with Disabilities:

Accommodations will be made for students with disabilities. Students needing accommodations for any type of disability must do the following:

1. Go to the ICSW Office of Disability Services to obtain confidential verification of the disability and a statement of accommodations recommended by that office.

2. Show the ICSW Office of Disability Services accommodation letter to the instructor of the class for which the student requests accommodation.

3. Show the accommodation letter to the instructor at the beginning of the course or before the start of the course.

Questions and Concerns:

I am willing to discuss problems about course work during the week. Please do not try to contact me on Friday nights or Saturdays (the Jewish Sabbath) or Jewish holidays, even if you have what you think is an emergency about a grade. I will understand and make exceptions for you about assignments if you email. Do not hesitate to ask for clarification about anything having to do with this class.

Assignments, Tests, & Grading

The only assignments are your readings.

Class participation 20% of the grade

Class quizzes, 40%

Finals at the end of each semester, 40%



Monday, December 21, 2009

The Research Process

STAGES OF THE RESEARCH PROCESS

Linda Freedman
ICSW, Chicago, IL
Distance Learning Program

I. Selecting an area of interest

a. Look around

b. Listen to others

c. Put a problem into words

II. Formulate a research question and hypothesis



a. Focus on alternatives

b. Think, what if we changed something

c. Think of many different somethings

d. Refine those and create an If/Then

III. Formulate the beginning of a research design



a. Identify the variables that you're studying

b. Decide how they can be measured (operationalize)

c. Find out how others have measured them

d. Find out if the measures you're choosing have been shown
to be reliable and valid

IV. Work up a sampling plan

a. Consider how you'll find your research subjects

b. Think about how you'll be sure that they don't feel coerced or
otherwise compelled (social desirability bias) to participate

c. Review the IRB's protocol requirements for the protection of research subjects

d. List the potential harms to subjects that your study might bring

e. Reason out how you will protect subjects from these problems

f. Think about the cost-benefits of the study. Is your study worth it?

V. Work up a data-collection methodology

a. Consider data-collection methods beyond the instruments you're using.

b. How will you ensure that subjects are treated ethically?

c. Are they treated with cultural sensitivity?


VI. Collect the Data

VII. Test the hypotheses; analyze the data

VIII. Put it into words, write down the findings

IX. Publish or otherwise disseminate the findings



a. The subjects

b. The scientific community


Thursday, December 10, 2009

Strength and Direction

The real point of the last lecture is that these are hugely important concepts, and if you get these, you'll be able to understand the research synopses that you read in any journal (if you read between the lines, basically, and look for them).

Strength is the degree to which one variable is related to another. We used the example of senility and age: the higher the age, the more the senility. That is the likely finding of a bivariate study on this topic.

The hypothesis there could be,

If a person is in his 90's, he is likely to suffer more senile dementia than if a person is in his 40's.

OR If people reach retirement and beyond, then the degree of senile dementia will be proportional to their age.

OR People over the age of 65 will have more senile dementia than people 65 years old and younger.

This study, with only 2 variables, is called a bivariate study. I think I mistakenly called it binary in class.

Now if you were to tell me, but maybe it isn't aging that is responsible for dementia, but poverty, because people get poorer as they age, then you're thinking that there's an extraneous variable involved that you had better control for so you don't mistakenly say that it's all about age.

Now you have a multivariate study. You have 2 independent variables and one dependent variable. You measure all three. Then you check to see which variables are related to one another, in other words, when the measurement of one of them goes up, then the other goes up (or down if the relationship is negative, meaning inverse).

That going up when one goes up, or down when the other goes up, is the DIRECTION of the relationship. How much one goes up when the other goes up, or down when the other goes up, is the STRENGTH. We measure this with one statistic, usually the Pearson r.

And when the computer spits out this statistic, there's a plus sign or a minus sign that tells us the direction. It's really not a plus sign, there's just no sign for a positive relationship, but there is a minus sign to indicate a negative relationship. And the r will only be between -1 and +1, e.g., -.72, .98, or -.01.

-.01 is a very tiny relationship, hardly any at all. If the strength of the relationship between senility and age were -.01 then we would know, intuitively, that the two variables are not related.
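If you want to see this concretely, here is a minimal sketch in Python. The age and dementia-score numbers are invented, purely for illustration, and scipy's pearsonr is just one common way to get the statistic:

# Toy illustration of strength and direction with Pearson's r.
# All numbers are made up; only the idea matters.
from scipy.stats import pearsonr

age = [45, 52, 60, 68, 75, 83, 91]        # independent variable
dementia_score = [2, 3, 3, 5, 6, 8, 9]    # dependent variable (higher = more impairment)

r, p = pearsonr(age, dementia_score)
print(round(r, 2))   # close to +1: a strong relationship, positive direction (no sign printed)

# Flip one variable and the sign flips too: same strength, negative (inverse) direction.
r_inverse, _ = pearsonr(age, [-score for score in dementia_score])
print(round(r_inverse, 2))   # same magnitude, now with a minus sign

The magnitude of r is the strength; the presence or absence of the minus sign is the direction.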

And it could be that the relationship between poverty and dementia turned out to be significant, too, maybe .62. That would mean that each of the two independent variables predicted the dependent variable.

NOW. You would also check to see if the two independent variables were related, poverty and aging. And there's yet another test to see if each of them contributes something different to the dependent variable (assuming they're both significantly related to the dependent variable).

We'll get to that later. Enough for now.
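If you'd like a preview of that later step, here is a rough sketch, again with invented numbers. An ordinary least squares regression with both independent variables in the model is one standard way to see what each adds once the other is taken into account:

# Rough preview sketch: regress the dementia score on BOTH age and poverty at once.
# All numbers are invented; this only shows the shape of the analysis.
import numpy as np

age = np.array([45, 52, 60, 68, 75, 83, 91], dtype=float)
poverty = np.array([10, 12, 18, 20, 25, 30, 35], dtype=float)   # some poverty index
dementia = np.array([2, 3, 3, 5, 6, 8, 9], dtype=float)

# First, are the two independent variables related to each other?
print(np.corrcoef(age, poverty)[0, 1])

# Then fit dementia = b0 + b1*age + b2*poverty by ordinary least squares.
X = np.column_stack([np.ones_like(age), age, poverty])
coefficients, *_ = np.linalg.lstsq(X, dementia, rcond=None)
print(coefficients)   # b1 and b2 estimate what each variable adds with the other held constant

In a real study you would also want significance tests for those coefficients, which is where we'll pick up later.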

Tuesday, December 8, 2009

About variables

Review for today’s quiz (December 8, 2009)

We'll go over this material, then you'll take a quiz, then we'll get to the new material in chaps 8 and 9

What’s an independent variable?

What’s a dependent variable?

What’s an extraneous variable?

How do you control for an extraneous variable to see if it’s really the reason a particular relationship is significant?

What’s a spurious relationship?

What’s a mediating variable?

What’s a moderating variable?

What do strength and direction mean?

Start with a direction, name one, give an example

Now strength, example

What’s another name for a negative relationship?

Tuesday, November 17, 2009

Revised Syllabus

Dec 8, read chaps 8-9 and first Chapter of Freakonomics
Dec 19, read chaps 10-11 and second Chapter of "
Jan 12 Review for final
Jan 14-15 I'll email you the final
Jan 20 Final is due by email

Jan 26, You'll have your grade in class and we'll discuss the final

Monday, November 9, 2009

Conflict of Interest

ETHICS in SOCIAL WORK RESEARCH

http://www.socialworkers.org/pubs/code/default.asp The NASW Code of Ethics
Click on English or Spanish.

Read through, then scroll down to

1.06 Conflicts of Interest
(a) Social workers should be alert to and avoid conflicts of interest that interfere with the exercise of professional discretion and impartial judgment. Social workers should inform clients when a real or potential conflict of interest arises and take reasonable steps to resolve the issue in a manner that makes the clients’ interests primary and protects clients’ interests to the greatest extent possible. In some cases, protecting clients’ interests may require termination of the professional relationship with proper referral of the client.

(b) Social workers should not take unfair advantage of any professional relationship or exploit others to further their personal, religious, political, or business interests.

(c) Social workers should not engage in dual or multiple relationships with clients or former clients in which there is a risk of exploitation or potential harm to the client. In instances when dual or multiple relationships are unavoidable, social workers should take steps to protect clients and are responsible for setting clear, appropriate, and culturally sensitive boundaries. (Dual or multiple relationships occur when social workers relate to clients in more than one relationship, whether professional, social, or business. Dual or multiple relationships can occur simultaneously or consecutively.)

(d) When social workers provide services to two or more people who have a relationship with each other (for example, couples, family members), social workers should clarify with all parties which individuals will be considered clients and the nature of social workers’ professional obligations to the various individuals who are receiving services. Social workers who anticipate a conflict of interest among the individuals receiving services or who anticipate having to perform in potentially conflicting roles (for example, when a social worker is asked to testify in a child custody dispute or divorce proceedings involving clients) should clarify their role with the parties involved and take appropriate action to minimize any conflict of interest.

Do you see any potential problems with research and ethics that we have to be aware of?

More on NASW Code of Ethics SEC 5.02

5.02 Evaluation and Research
(a) Social workers should monitor and evaluate policies, the implementation of programs, and practice interventions.

(b) Social workers should promote and facilitate evaluation and research to contribute to the development of knowledge.

(c) Social workers should critically examine and keep current with emerging knowledge relevant to social work and fully use evaluation and research evidence in their professional practice.

(d) Social workers engaged in evaluation or research should carefully consider possible consequences and should follow guidelines developed for the protection of evaluation and research participants. Appropriate institutional review boards should be consulted.

(e) Social workers engaged in evaluation or research should obtain voluntary and written informed consent from participants, when appropriate, without any implied or actual deprivation or penalty for refusal to participate; without undue inducement to participate; and with due regard for participants’ well-being, privacy, and dignity. Informed consent should include information about the nature, extent, and duration of the participation requested and disclosure of the risks and benefits of participation in the research.

(f) When evaluation or research participants are incapable of giving informed consent, social workers should provide an appropriate explanation to the participants, obtain the participants’ assent to the extent they are able, and obtain written consent from an appropriate proxy.

(g) Social workers should never design or conduct evaluation or research that does not use consent procedures, such as certain forms of naturalistic observation and archival research, unless rigorous and responsible review of the research has found it to be justified because of its prospective scientific, educational, or applied value and unless equally effective alternative procedures that do not involve waiver of consent are not feasible.

(h) Social workers should inform participants of their right to withdraw from evaluation and research at any time without penalty.

(i) Social workers should take appropriate steps to ensure that participants in evaluation and research have access to appropriate supportive services.

(j) Social workers engaged in evaluation or research should protect participants from unwarranted physical or mental distress, harm, danger, or deprivation.

(k) Social workers engaged in the evaluation of services should discuss collected information only for professional purposes and only with people professionally concerned with this information.

(l) Social workers engaged in evaluation or research should ensure the anonymity or confidentiality of participants and of the data obtained from them. Social workers should inform participants of any limits of confidentiality, the measures that will be taken to ensure confidentiality, and when any records containing research data will be destroyed.

(m) Social workers who report evaluation and research results should protect participants’ confidentiality by omitting identifying information unless proper consent has been obtained authorizing disclosure.

(n) Social workers should report evaluation and research findings accurately. They should not fabricate or falsify results and should take steps to correct any errors later found in published data using standard publication methods.

(o) Social workers engaged in evaluation or research should be alert to and avoid conflicts of interest and dual relationships with participants, should inform participants when a real or potential conflict of interest arises, and should take steps to resolve the issue in a manner that makes participants’ interests primary.

(p) Social workers should educate themselves, their students, and their colleagues about responsible research practices.

IRB

ETHICS in SOCIAL WORK RESEARCH-3

NOW GO TO THE ICSW IRB PAGE and download the manual.
(ICSW.edu- Academic Resources-IRB)

3.0 Types of Submissions

Each request for IRB review will address all of the areas outlined on the IRB Application Form (www.icsw.edu). In addition to the credentials of the project personnel, this will include statements of the:

1. Specific goals and objectives of the project.
2. Significance and context of the proposed work.
3. Need for and value of the project in relation to prior work in the field.
4. Study design.
5. Selection and recruitment of subjects, including informed consent procedures.
6. Risks and benefits of the research.
7. Data collection methodology and confidentiality of data.

In addition, the primary investigator and faculty chair or faculty sponsor will give signed assurance that the application and the research described therein will be conducted in accordance with the legal, ethical and professional standards of practice as outlined.

Appendices will include copies of any interview schedule, all tests, questionnaires, inventories, consent/assent forms, and letters to participants as well as materials used to solicit participants.


3.1.2 Dissertations

All doctoral students intending to do research involving use of human subjects must submit an application for IRB review as outlined for faculty and staff in 3.1.1.1. No pilot data collection can begin, nor can the proposal defense hearing be scheduled, until the IRB approval is completed.

1) Research proposals should be submitted first to the student’s Dissertation Committee for review of the academic merit and ethical issues of the proposal. Upon completion of this initial informal review of the research proposal by the Committee, the student completes an IRB Application Form (see Section 5.1) and the Chair reviews and signs it.

2) Application for IRB review must be submitted to the IRB before the proposal defense is scheduled.

3) Documentation of each aspect of the research process must be included with the application:
(a) If applicable, include copies of any materials used in the recruitment of subjects.
(b) Informed consent protocol and forms must include:
(i) Written consent/assent forms for each participant
(ii) Procedures for reading forms or describing contents or providing participants with adequate time to read and question contents of form
(iii) Assurances should be provided that participants are given a copy of the signed form.

(c) Include copies of all data collection tools such as interview schedules, tests or other forms to be completed by subjects or others related to the research.
(d) Original signed forms must be kept on file by researcher.

4) Annual Report forms (see Section 5.3) should be submitted to the IRB no less than once a year. The researcher will receive an Annual Report Form approximately 11 months after initial approval was received.

5) Submission of injury reports and reports of unanticipated problems involving risks must be made to the Chair of the IRB or other designated responsible person as they occur in the most expeditious manner possible. Any delay in reporting or failure to report injury or unexpected negative consequences may result in the student being removed from the program and/or the project being stopped immediately.

6) A prompt report should be made whenever there is change in the research protocol.

7) The researcher and/or the Dissertation Committee Chair are independently responsible for reporting to the IRB, by phone if urgent, or by email and/or other written report, any noncompliance with the agreed-upon process for research.

Students should submit one electronic copy or two paper copies of the appropriate IRB Application Form (See 5.1) or the Request for IRB Review of Course-Related Research (See 5.2), following the same format as that for faculty and staff as outlined in section 3.1.1.1. Each area of concern must be addressed:

1. Specific goals and objectives of the project
2. Significance and context of the proposed work
3. Need for and value of the project in relation to prior work in the field
4. Study design
5. Selection and recruitment of subjects, including informed consent procedures
6. Risks and benefits of the research
7. Data collection methodology and confidentiality of data

Appendices will include copies of any interview schedule, all tests, questionnaires, inventories, consent/assent forms, and letters to participants as well as materials used to solicit participants.

See detailed explanations in the Proposal Outline Guidelines, Section 3.3.1, and/or the Informed Consent Guidelines, in Section 3.3.2.

IRB FAQ

PLEASE FIND IT AT THE ICSW WEBSITE AND READ IT CAREFULLY

IRB APPLICATION FORM

IRB Application Form
INSTITUTE FOR CLINICAL SOCIAL WORK

Incomplete application packets or applications with omitted materials may result in review delays.

Checklist of Supporting Material for Investigators:

[ ] Resume or CV of principal investigator or faculty sponsor
[ ] Flyers, advertisements, oral scripts (including telephone scripts), other recruitment materials
[ ] Consent/assent forms
[ ] Surveys, questionnaires, interview questions/guides
[ ] Debriefing information, if applicable
[ ] Letters of collaboration, if applicable
[ ] Funding Proposals, if applicable. For federally funded research include all sections of the proposal or grant except the budget pages.


Dissertation or Faculty Research

[For completing this application, use only Courier or Times New Roman font, 11+ pt.]

Step I: Project Personnel

Project Title:

Principal Investigator: (attach resume or CV)

Address:

Phone: E-mail:


List all co-investigators and/or faculty sponsors below, including those from other institutions.

NOTE: If this is dissertation research, list dissertation chair.

Name, Degree, Address, Phone and email or other contact information:

1.

2.

3.


Contact Information

Who should be contacted for further information about this application?

Name: Position on the project:

Phone: E-mail:



Step II: Funding Sources & Performance Sites

[If this project is funded by an outside source, complete Step II]

Check all of the appropriate boxes for funding sources for this research. Include pending funding source(s).

[ ] Federal – If federally funded, provide name and address of individual to whom certification of IRB should be sent:

[ ] Extramural (non-federal funding sources) – Provide name and address of individual to whom certification of IRB should be sent:

Principal Investigator of Grant or Contract:

Name of Funding Source:

Grant or Contract Number (If available):

Grant, Contract, or Project Title:


Performance Sites:

List all collaborating sites:

Provide letters of cooperation or support: [ ] Attached [ ] Pending [ ] Not Applicable

Provide letter of IRB approval from other site: [ ] Attached [ ] Pending [ ] Not Applicable



Step III: Specific Aims, Goals, and Objectives of the Project (no more than 200 words)

Summarize the specific aims, goals, and objectives of the proposed research using non-technical language that can be understood by any generally informed layperson.


Step IVa: Significance and Context of the Work, Including Any New Information to Be Obtained (no more than one page)

Using non-technical language that can be understood by any generally informed layperson, describe the significance and context of the work, including new information the Principal Investigator intends to obtain. In the case of classroom instructional activities, what types of skills or knowledge is the research intended to provide for the students?


Step IVb: Need for and value of the Project in Relation to Prior Work in the Field (one page)

Demonstrate that the goals and objectives described are worthy of investigation and require the use of human research participants.


Step IVc: Study Design (one page)

Include a detailed description of the specific study design to be used, demonstrating a logically derived connection between the design, the significance of the project, and the need for the project. NOTE: Sample selection, protection of participants, and data collection methodology are covered below.


Step V: Participant Population

Please indicate the total number of participants anticipated for inclusion in this project. This number should be the number of participants you will enroll in order to get the adequate data sets you will need. If multiple sites are to be used, provide an estimate of the number in each category to be recruited from each site. In addition, if you plan to study only one gender, provide detailed rationale in the inclusion/exclusion section (Step VI, C, 1 and 2) of this application.

A. Number of Participants Required:
Male:
Female:
Total:

B. Age range (check all that apply):
[ ] 0-7 yrs. (submit parental permission form – template C)
[ ] 8-17 yrs. (submit child’s assent form – template D and parental permission form – template C)
[ ] 18-64 yrs. (submit informed consent form – template B)
[ ] 65+ yrs. (submit informed consent form – template B)

C. Source/Type of Participants (check all that apply): [ICSW students and faculty may not be subjects]
[ ] medical patients
[ ] volunteers from the general population
[ ] community institutions, please specify:
[ ] other, please specify:

D. Participant location during research data collection (check all that apply):
[ ] participant’s home
[ ] hospital or clinic, please specify:
[ ] community location, please specify:
[ ] elementary or secondary school, please specify:
[ ] other, please specify:

E. Special populations to be included in the research (check all that apply):
[ ] minors under age 18
[ ] pregnant women
[ ] prisoners
[ ] economically disadvantaged
[ ] developmentally delayed
[ ] severe and/or chronically mentally ill
[ ] other, please specify:

F. Several groups listed in (E) above are considered “vulnerable” or require special consideration by the federal regulatory agencies and by the IRB. Provide the rationale for using special populations.



Step VI: Recruitment Procedures (no more than one page)

A. Describe how participants will be identified and recruited. Attach all recruitment information, e.g., advertisements, bulletin board notices, and recruitment letters for all types of media (printed, radio, electronic, TV, or Internet).

B. Initial Contact: Describe who will make initial contact and how. If participants are chosen from records, indicate who gave approval for the use of the records. If records are “private” medical or student records, provide the release forms, consent forms, letters, and HIPAA if appropriate for securing consent of the participants for the records. Written documentation for cooperation/permission from the holder or custodian of the records should be attached. (Initial contact of participants identified through a records search must be made by the official holder of the record, such as, primary physician, therapist, school official.)

C. List criteria for inclusion and exclusion of participants in this study. Describe populations to be excluded from the research. Please describe procedures to assure equitable selection of participants. Researchers should not select participants on the basis of discriminatory criteria. Selection criteria that exclude a sex, racial, or ethnic group require a clear scientific rationale for the exclusion. By whom will the inclusion/exclusion criteria be determined in the selection of the subjects? (For example, principal investigator, research assistant, school official.)

D. Will participants receive financial or other compensation before or after the study? If yes, explain. (NOTE: this information must be outlined in the consent document.)



Step VII: Informed Consent Process

Simply giving a consent form to a participant does not constitute informed consent. The following questions pertain to the process. Researchers are cautioned that consent forms should be written in simple declarative sentences. Forms should be jargon free (see consent form templates). Foreign language versions should be prepared and included for all materials that the participants needing translations will encounter in the research.

A. Capacity to consent: Will all adult participants have the capacity to give informed consent?
[ ] Yes [ ] No

If No: describe the likely range of impairment and explain how, and by whom, their capacity to consent will be determined. NOTE: in research involving more than minimal risk, capacity to consent should be determined by a psychiatrist, psychologist, or other qualified professional not otherwise involved in the research. Individuals who lack the capacity to consent may participate in research only if a legally authorized representative gives consent on their behalf.

B. Describe what will actually be said to the participants to explain the research. (NOTE: do not say “see consent form.”) Write the explanation in lay language. If you are using telephone surveys, telephone scripts are required. If you will include participants not fluent in English, please provide an appropriate translation.

C. How will participants’ understanding be assessed? What questions will be asked to assess the participants’ understanding? If you will include participants not fluent in English, please provide an appropriate translation.

NOTE: the purpose of this question is to have you describe how you will assess participants’ understanding of the consent process. Questions requiring “yes or no” answers are not appropriate. Please ask participants to explain the purpose of the study to you along with the risks and benefits to themselves as participants. Their answers to these questions should allow you to determine whether they understand the study and their part in it. If they do not understand, informed consent has not been achieved, irrespective of whether the participant signed the consent document.

D. In relation to the actual data gathering, when and where will consent be discussed and documentation obtained, for example several days before? Be specific.

E. Will the Investigator(s) be securing all of the Informed consents? [ ] Yes [ ] No [ ] N/A

If no, name the specific individuals who will obtain informed consent and include their job title and a brief description of your plans to train these individuals to obtain consent and answer participants’ questions:

F. Consent and Assent Forms (see templates C & D)

Prepare and attach the appropriate consent/assent form(s) for IRB review. If you intend to use participants that are not fluent or literate in English, have appropriate translations and back translations been developed and attached to this application? [ ] Yes [ ] No



Step VIII: Risks and Benefits of the Research

A. Does the research involve (check all that apply):
[ ] use of private records (medical, mental health or educational records)
[ ] possible invasion of privacy of participant or family
[ ] the collection of personal or sensitive information in surveys or interviews
[ ] use of a deceptive technique (If use of deception is part of the protocol, the protocol must include a “debriefing procedure” [provide this procedure for IRB review] which will be followed upon completion of the study, or withdrawal of the participants.)
[ ] presentation of materials that participants might consider offensive, threatening, or degrading
[ ] other risks, specify:

NOTE: Respond to VIII B-E in written form, not more than 2 pages.

B. Identify the risks (current and potential) and describe the expected frequency, degree of severity, and potential reversibility. Include any potential late effects. (NOTE: Risks can be psychological, physical, social, economic, or legal.)

C. Describe the precautions taken to minimize risk.

D. Why are the risks mentioned above reasonable? What is the expected scholarly yield from the project? Justify the risks in relation to the anticipated benefits to the participants and in relation to the importance of the knowledge that may reasonably be expected to result from the research.

E. Benefits of participation: List any anticipated direct benefits of participation in this research project. If none, state that fact here and in the consent form. The knowledge gained from the study could produce a benefit to society. Payment or course credit is not considered to be a benefit of participation. Any benefits of the specific research procedures should be listed as potential benefits.




Step IX: Data Collection Methodology

Describe the tasks, tests, or other procedures, including the interview process, participants will be asked to complete. (Suggestion: explain step by step what the participants will be asked to do and distinguish those which are experimental from those which comprise routine tasks encountered in everyday life). Specify exactly how these tasks, tests, or procedures will generate the data that will permit achievement of the goals and objectives of this research.




Step X: Confidentiality of Data

A. Describe provisions made to maintain confidentiality of data.

1. Who will have access to raw data?

2. Will raw data be made available to anyone other than the Principal Investigator and immediate study personnel (e.g., school officials, medical personnel)? [ ] Yes [ ] No

If yes, who, how, and why?


3. If applicable, describe the procedure for sharing data.

4. If applicable, describe how the participant will be informed that the data may be shared.

5. If data are collected, stored, or analyzed on computers, describe the actual security measures used to ensure confidentiality.


B. Where will the data be kept? (Data must be secured for five years after graduation.) How will data stored on audio and videotapes be disposed of? (Disposition of audio and video tapes should be included in consent form.)



C. Will data identifying the participants be made available to anyone other than the principal investigator, that is, study sponsor, Institutional Review Board? [ ] Yes [ ] No
If yes: Who:


D. Will the research data and information be part of any permanent record? (Explanation must be in the consent form.) [ ] Yes [ ] No


E. If participants are students, will school officials receive the data with identifiers attached? (Explain here and in the consent form using appropriate language.) [ ] Yes [ ] No



Investigator’s Assurance

I certify that the information provided in this application is complete and correct.
I understand that as Principal Investigator, I have ultimate responsibility for the protection of the rights and welfare of human participants, conduct of the study and the ethical performance of the project. I agree to comply with all IRB policies and procedures, as well as with all applicable federal, state and local laws regarding the protection of human participants in research, including, but not limited to, the following:

• The project will be performed by qualified personnel according to the ICSW IRB certified protocol.
• No changes will be made in the protocol or consent form until approved by the ICSW IRB.
• Legally effective informed consent will be obtained from human participants if applicable.
• Adverse events will be reported to the ICSW IRB in a timely manner.

I further certify that the proposed research is not currently underway (except for those protocols of research previously approved and currently seeking renewal) and will not begin until approval has been obtained.


Principal Investigator’s Signature: Date: _



Dissertation Chair or Faculty Assurance for Student or Guest Investigators

By my signature as Chair or Sponsor on this research application, I certify that the student or guest investigator is knowledgeable about the regulations and policies governing research with human participants and has sufficient training and experience to conduct this particular study in accord with the approved protocol. In addition,

• I agree to meet with the investigator on a regular basis to monitor study progress.
• Should problems arise during the course of the study, I agree to be available, personally, to supervise the investigator in solving them.
• I ensure that the investigator will promptly report significant or untoward adverse effects to the ICSW IRB in a timely manner.

If I will be unavailable, as when on sabbatical leave or vacation, I will arrange for an alternate faculty sponsor to assume responsibility during my absence and I will advise the ICSW IRB by letter of such arrangements. I further certify that the proposed research is not currently underway and will not begin until approval is obtained.


Faculty Sponsor’s Signature: _____________ Date:_____


NOTE: The faculty sponsor must be a member of the ICSW faculty. The faculty member is the responsible party for legal and ethical performance of the project.

Submit this application to the office either in hard copy or as an attachment to an email.

Send questions to: Dan Rosenfeld, Chair, ICSW IRB: DanR61@aol.com

Monday, November 2, 2009

SECOND CLASS 11-3-09

We'll be discussing Rubin and Babbie, chaps 4-6, with much focus on ethics, primarily the IRB: what it is, why you should care, and why you'd better pay attention to this class.

Click on this link to find the NASW Code of Ethics. Please read through it. Pay attention to 1.06, Conflicts of Interest and 5.02, Evaluation and Research.

Then go to the ICSW website, ICSW.edu. Click on Academic Resources, then IRB.

Download the manual.

READ IT!

Pay close attention to 3.0 and 3.1.2.

See you in class.

Wednesday, October 7, 2009

Who's in This Class?

Hi all,

Teaching like this, from a distance, has its pluses and minuses.

One of the minuses is that I have only heard from four of you by email.

You're supposed to send me an introductory paragraph about yourself (name, where you work, stuff like that, whatever you want me to know about you)

freedman.linda@yahoo.com

Thanks.

So far the votes are in favor of starting at 8:00. But again, not everyone has chimed in! So please, let me know if you prefer 8 to 7:30 for class time.


Linda

Tuesday, October 6, 2009

Homework for Oct 20

Hi again,

I hope you were able to follow along on Sunday. Please feel free to review the videos. They cover the first three chapters of the Rubin and Babbie book, 6th edition.

For your next class, please read the first five chapters of that book.

Be ready for a quiz on the material in the first three chapters.

We're voting to see what time to start on Tuesday, either 7:30 Central Standard Time, or 8:00. I don't care either way, and this year we have people on the east and west coasts, so it will be a toss-up I guess.

Email me your choice.

Linda

Tuesday, September 29, 2009

First Class at the Institute

Introduction to the Class (RP 1)

This introduction was originally presented by video at the Institute because I couldn't be there for a Sunday afternoon on-site class. That's why I refer to people having already shared about themselves on Saturday.




Why Bother with Research? (RP2)





Are Social Workers Any Good At This? Our History (RP3)





The Philosopher in the PhD (RP 4)

The scientific method is one thing, but there are other ways of knowing. Do we care? Well, hopefully you'll respectfully defer to the methodology of social science.



Okay, now, as a group, look up alternative epistemologies, other ways of knowing, and be prepared to tell me, next class, why they're inferior.

No disrespect.

Evidence Based Practice (RP5)

How the empirical clinical model, generally a single-system design, morphed into Evidence Based Practice, basing our clinical decisions upon interventions that work for particular populations, based, of course, upon research.




Compassionate Practice is Evidence Based (RP 6)

We talk about that, and then ask three totally unrelated questions:

1. What is the difference between a top-down and a bottom-up search?

2. What types of studies are randomized clinical trials and why do social workers rarely use them?

3. Is there a disconnect between qualitative research and evidence based practice? Why or why not?





Qualitative Research, Post-Modernism, and Paradigms (RP 7)

I tell you, in this video, that the majority of the studies done here are qualitative, but that might no longer be true. It seems many students are doing mixed-methods dissertations.

The big debate in social science research is whether or not qualitative methodology or quantitative methodology is superior, more "scientific."

All of the social sciences have their own particular paradigms to explain human behavior.





Social Work Research Paradigms (RP 8)

We're back to whether one paradigm is better than another, and we come to the conclusion that what you are studying, whom, and why is what makes that determination.

The six paradigms that social scientists use to guide how they conduct research: the symbolic-interactional, functionalist, conflict, positivist, interpretivist, and critical social science paradigms.

Friday, September 25, 2009

First Assignment

Well, there has to be some preparation, right?

You can come to class cold and not read anything in preparation for this Sunday's class if you want. But you can't avoid it forever, and you'll get more out of it if you read the first three chapters in Rubin and Babbie's 6th edition of Research Methods for Social Work.

Class will be presented by video, and I skip a lot and speak a little fast sometimes. So if you want to learn and reinforce what you're learning on your own (one way of learning) then read ahead of time. If there's no time, such is life.

The topics we'll cover (first three chapters of the 6th edition) include:

Why Study Research?

Evidence Based Practice

and Philosophy in Social Work Research, or philosophy and theory.


By the time you get to class on Sunday the videos of me teaching will be posted.

I think you should watch them as a group. There's class discussion to follow on some, and preparation for a quiz that you'll have after the first half hour of your second class (not at the Institute, but on-line).

I'm really sorry that we can't meet in person.

Linda

Thursday, September 10, 2009

Syllabus for 2009

RM 512 Research Process, Distance Learning
Fall 2009
Linda Freedman, LCSW, LMFT, PhD
Office: 773-271-7111

Distance Learning students are scheduled for two sessions on-site at ICSW.
Regretfully, I’ll be unable to meet with students for the first class, Oct 4, 2009, at the Institute, but I will put up video on our class blog, http://icswrp.blogspot.com. Please read the first three chapters in the Rubin and Babbie book. If you don’t have time, read them after you hear the lectures. But please read them.

Our usual time will be Tuesdays, 7:30-9:30 pm. (I think.)

Attendance:

This course is taught virtually in a lecture/discussion format on the web.

Be prepared to participate, please, because class participation will make up 20% of your grade.

For students who miss more than one class session, except in a documented personal emergency, the overall course grade will be lowered one level. Students who miss more than two class sessions will automatically fail the course. In cases of personal emergency, the student will be asked to withdraw from the course and retake it the following academic year.

Required Texts & Readings

American Psychological Association (2009). Publication manual of the American Psychological Association ( 6th Ed.). Washington D.C.: Author.

Rubin, A., & Babbie, E. Research methods for social work (2nd or 6th ed.). Pacific Grove, CA: Brooks/Cole Publishing Co.

Levitt, S. D., & Dubner, S. J. Freakonomics: A rogue economist explores the hidden side of everything (Rev. ed., paperback).

Recommended:

Fortune, A. E., & Reid, W. J. (1999). Research in social work. New York: Columbia University Press.

Gorey, K., Thyer, B. & Pawluck, D. (1998). Differential effectiveness of prevalent social work practice models: A meta-analysis. Social Work, 43(3), 269-278.

Oakes, J. (2002). Risks and wrongs in social science research: An evaluator's guide to the IRB. Evaluation Research, 26(5), 443-479.

Myers, L., & Thyer, B. (1997). Should social work clients have the right to effective treatment? Social Work, 42, 288-298.

Strauss, A. & Corbin, J. (1990 or later edition). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.

Locke, L.F., Spirduso, W. W., & Silverman, S. J. (1993). Proposals that work. Newbury Park, CA: Sage.

Institute for Clinical Social Work web page – http://www.icsw.edu

Additional readings are on reserve at the Laura Kramer Fischer Library. Additional readings are also electronic articles available on the web or through library resources, including inter-library loan.

Please note: Required readings and assignments will be posted on the class blog, along with an updated syllabus. It is your responsibility to check often for revisions. You can always email me if you aren’t sure of a class meeting or time. Here’s the URL to the blog. http://icswrp.blogspot.com

Course Description

The purpose of this course is to continue to provide incoming PhD students the opportunity to familiarize themselves and become comfortable with the research process, particularly the doctoral research process.

The underlying aim is to assist students as they design mock-up research proposals. In class, and on the final, students will:

(1) explore problems important to the field of social work

(2) pose research questions about such problems

(3) suggest hypotheses based upon theory

(4) define and operationalize variables

(5) define measurement strategies

(6) propose data collection methodology

(7) know how to present and disseminate research findings.


This course will not provide students with expertise in any one research area, but it will provide a good foundation for further study and education. The hope is to promote flexibility in future research endeavors.

Learning will be a collaborative effort, drawing upon the experiences and expertise of all class members. Forty percent of your class grade is based upon weekly quizzes that you will work on together during class time. If you miss the class, you lose these points. I'm sorry.

Other Learning Objectives:

Upon completion of the course, students should be able to:

• Promote critical analytic skills for developing, implementing, and critiquing research problems and questions appropriate to all levels of practice, including practice at work sites.

• Select appropriate quantitative and qualitative approaches to guide research on a particular topic, including the use of available data, experimental and quasi-experimental designs, surveys, intensive interviewing, and participant observation.

• Implement procedures for assuring the ethical conduct of research, including the necessity of obtaining informed consent; inclusion of safeguards to ensure the confidentiality of research data; assurance of voluntary participation in research; and an appreciation for not using vulnerable populations as research subjects just because they may be more available.

• Use current technology, including the Internet and a variety of existing social science and social work databases for understanding specific human conditions and biopsychosocial interventions.

• Design studies that contribute to knowledge about social work clients, practice, and policy.

• Critique existing research in terms of its ability to rule out other possible explanations for findings.

• Critique existing research in terms of its relevance and generalizability, particularly to women, racial, ethnic, other minority groups, and people from different socioeconomic classes.

• Evaluate research according to principles of social justice, cultural competence, and utility.

• Develop procedures for coping with organizational and sociopolitical issues in agency-based research, from how research projects get framed to how access to data can be affected.

Course Expectations:

Students are expected to complete assigned readings in advance of class meetings.

We won’t necessarily discuss the readings in class, but you are expected to know the material. For example, you will read the entire APA style citation manual, and you should be able to use it, but we’re not spending time in class on it because it is self-explanatory. But you may be asked to use it on the final.

All students will be held accountable for adhering to academic and nonacademic standards of conduct as described in the ICSW Student Handbook, available on the ICSW website.

Accommodations for Students with Disabilities:

Accommodations will be made for students with disabilities. Students needing accommodations for any type of disability must do the following:

1. Go to the ICSW Office of Disability Services to obtain confidential verification of the disability and a statement of accommodations recommended by that office.

2. Show the ICSW Office of Disability Services accommodation letter to the instructor of the class for which the student requests accommodation.

3. Show the accommodation letter to the instructor at the beginning of the course or before the start of the course.

Questions and Concerns:


I am willing to discuss problems about course work during the week. Please do not try to contact me on Friday nights or Saturdays (the Jewish Sabbath) or on Jewish holidays, even if you have what you think is an emergency about a grade. I will understand and make exceptions for you about assignments if you email. Do not hesitate to ask for clarification about anything having to do with this class.

Assignments, Tests, & Grading

The only assignments are your readings.

Class participation 20% of the grade

Class quizzes, 40%

Finals at the end of each semester, 40%


CHECK THE BLOG FOR CLASS SCHEDULE AND ASSIGNMENTS.
COMMENT HERE TO LEAVE ME YOUR EMAIL ADDRESS. THANKS!

Tuesday, June 16, 2009

Testing, testing

ALMOST EVERYTHING FROM THE PAST COUPLE OF DAYS HAS BEEN TAKEN FROM THE BOOK Research in Social Work BY: WILLIAM J REID AND AUDREY D SMITH

Those of you who do add a quantitative piece to your research will be picking and choosing between statistical tests.

The simplest test to measure differences between two groups is the t test.

If we randomly assigned people in an agency to a treatment and a control group to test a marital therapy intervention, we would use the t test in several different ways.

First we might decide to see if the two groups differ much. Just because we've assigned randomly doesn't mean that the characteristics of the two groups will be well matched. So we could compare age, number of years married, and number of children. If a t test on any of these variables found the groups to be significantly different, we might want to redo the assignment, or perhaps use matching.

Then we might do a pretest of all of the subjects of the experiment to see how both groups fared on the dependent variable. Let's make that marital satisfaction.

Then, after the intervention, we would see whether the pre-test/post-test scores for each group changed significantly. Finally, we could compare the changes between the groups to see whether there was a significant difference between them.

Very elegant, no?
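If you'd like to see the mechanics, here's a minimal sketch in Python using scipy. The data are entirely made up, and the group sizes, means, and the pretend "boost" from the intervention are assumptions for illustration only, not anyone's real findings.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical marital-satisfaction pretest scores for two randomly assigned groups
treatment_pre = rng.normal(50, 10, 30)
control_pre = rng.normal(50, 10, 30)

# 1. Did random assignment give us comparable groups?
check = stats.ttest_ind(treatment_pre, control_pre)
print("pretest comparison:", check.statistic, check.pvalue)

# 2. Did the treatment group change from pretest to post-test? (paired t test)
treatment_post = treatment_pre + rng.normal(5, 5, 30)   # pretend the intervention helped
change = stats.ttest_rel(treatment_pre, treatment_post)
print("treatment pre/post:", change.statistic, change.pvalue)

# 3. Do the change scores differ between the two groups?
control_post = control_pre + rng.normal(0, 5, 30)
between = stats.ttest_ind(treatment_post - treatment_pre, control_post - control_pre)
print("between-group change:", between.statistic, between.pvalue)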

If we're testing nominal data then we frequently use a contingency table and test for significance with the chi-square test. Instead of comparing means, as the t test does, the chi-square compares frequencies. The greater the difference in observed frequencies from an expected chance value, the greater the chi-square value for each cell.




Above is a table and results of a significant chi-square test
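If you want to run the same kind of test yourself, here is a hedged sketch in Python with scipy. The observed counts below are hypothetical, not the ones in the table.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table of observed frequencies
observed = np.array([[75, 25],
                     [25, 75]])

chi2, p, dof, expected = chi2_contingency(observed)
print("chi-square:", chi2, "df:", dof, "p:", p)
print("expected frequencies under independence:")
print(expected)

The bigger the gap between the observed and the expected frequencies, the bigger the chi-square, and the smaller the p value.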
If you tested three groups against a control group, used three different types of marital therapy, and tried to determine which was best, then the t test wouldn't work for you. You would need an F test: analysis of variance (ANOVA) or regression.
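A minimal sketch of that three-therapies-plus-control comparison, again with invented scores, using scipy's one-way ANOVA:

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
therapy_a = rng.normal(60, 10, 25)
therapy_b = rng.normal(58, 10, 25)
therapy_c = rng.normal(55, 10, 25)
control = rng.normal(50, 10, 25)

result = f_oneway(therapy_a, therapy_b, therapy_c, control)
print("F:", result.statistic, "p:", result.pvalue)
# A significant F says the group means differ somewhere; follow-up (post hoc)
# comparisons are needed to say which therapy did best.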

We'll discuss this in a different post, the one below.

The Meaning Behind the Measures of Association

The strength of an association is expressed in numbers between -1 and +1.
And the minus sign makes a difference: it tells us that the direction of the association is negative, that there's a negative association between the variables. More of one variable goes with less of the other.

The Pearson r is the statistic we use for interval and ratio data, so you will see it in much of the literature you read.

The r, a measure between -1 and +1, shows both the direction and the strength of the relationship between the variables in a sample. The r predicts, too: once you know r, knowing the value of one variable lets you make a better-than-chance estimate of the other.



Any association close to -1 or +1 is a strong association. So a negative correlation, like a -.95 is a strong correlation, stronger than a positive correlation of only .13.

A correlation of -.95 tells us that when one variable is high, the other will almost always be low; the direction doesn't matter for strength, the two variables are still strongly associated. An r of .13, by contrast, tells us that knowing one variable gives you very little information about the other.

To find out what portion of variation is explained by another variable, you square the correlation. So a Pearson r of .80, a strong relationship between variables, when squared, indicates to us that .64 or 64 percent of the variation is explained by that relationship.
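Here's a quick sketch of r and r squared in Python, with simulated data built to have a strong positive association. The .8 slope and the noise level are assumptions, purely for illustration.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
x = rng.normal(0, 1, 100)
y = 0.8 * x + rng.normal(0, 0.6, 100)

r, p = pearsonr(x, y)
print("r:", round(r, 2), "p:", round(p, 4))
print("r squared (variation explained):", round(r ** 2, 2))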

Something else, some other variables, must be explaining the rest of the variation.

Now you're into multivariate data analysis. Adding a third variable, we can partial out which variables affected the others by taking them two at a time. The computations are complex, but ultimately give us information about how much each variable contributed to the results of our statistical enterprises.

We ask our computers to spit out what are called correlation matrices. Below is a correlation matrix. The study in question looked at the reasons parents had for visiting their foster children. Very low correlations between the reasons on the left and the amount of parent contact indicate that those reasons didn't contribute very much to how often parents visited.

In this table only three of the reasons on the left showed any contribution at all: years in care (r = .256), reason placed, the child's behavior (r = .265), and the parent's discharge objective, wanting to take the child home (r = .387).

The little note under the table about dichotomous variables tells us that these were nominal variables "dummy" coded with 0's and 1's.
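If you want to produce a correlation matrix of your own, pandas will do it in one line. The column names below are hypothetical stand-ins for the kinds of variables in that foster-care study, and the data are random, so don't expect the correlations to match the ones above.

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "years_in_care": rng.integers(0, 10, 50),
    "child_behavior": rng.integers(0, 2, 50),       # dummy coded 0/1
    "discharge_objective": rng.integers(0, 2, 50),  # dummy coded 0/1
    "parent_contact": rng.normal(5, 2, 50),
})

print(df.corr())   # Pearson correlations for every pair of columns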

It is the magnitude, not the significance, that matters to us in regression, one way to analyze multivariate data. Regression helps us parse out the contributions of independent variables to the dependent variable(s).



The "beta" is called a beta weight, a standardized partial regression coefficient, sometimes called a standardized regression coefficient.

The b weights, by contrast, are the unstandardized coefficients, expressed in the original units of each variable. Standardizing them lets us compare the relative contribution of predictors measured on different scales.
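To see the difference between b weights and beta weights, here is a minimal sketch using statsmodels with invented variables. The names and the built-in relationships are assumptions, not anyone's real findings: fit the regression once on the raw scores, then once on z-scored versions of everything.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
years_in_care = rng.normal(5, 2, 100)
child_behavior = rng.integers(0, 2, 100).astype(float)   # dummy coded 0/1
contact = 0.5 * years_in_care + 2.0 * child_behavior + rng.normal(0, 1, 100)

df = pd.DataFrame({"years_in_care": years_in_care,
                   "child_behavior": child_behavior,
                   "contact": contact})

# Unstandardized b weights, in the original units of each predictor
X = sm.add_constant(df[["years_in_care", "child_behavior"]])
print(sm.OLS(df["contact"], X).fit().params)

# Standardized beta weights: z-score every variable, then refit
z = (df - df.mean()) / df.std()
Xz = sm.add_constant(z[["years_in_care", "child_behavior"]])
print(sm.OLS(z["contact"], Xz).fit().params)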

The one versus the two-tailed test

Obviously, what we're shooting for is to be able to reject a null hypothesis so we can say our research hypothesis is correct, or might be correct, at least is potentially correct.

So the bigger the rejection area (for we're testing to reject the null hypothesis) the better. That's why social workers tend to set their alphas at .05, not .01.

Another way to do this is to use a one-tailed test. In the two-tailed, non-directional tests, the rejection area, that five percent, is divided into .025 and .025. We're testing whether or not the intervention helped, for example, or perhaps if it hurt.

There isn't always a reason to see if something hurt, so we can use a directional, as opposed to a non-directional, test. This way we set the alpha at .05 and the rejection area seems larger because it's all on one side of the curve, beyond 1.65 SD from the mean, as opposed to 1.96 SD in a two-tailed test.

That increases the power, the likelihood that the test will reject the null, by increasing the size of the rejection region.
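Those 1.65 and 1.96 cutoffs come straight from the standard normal curve; a two-line sketch with scipy shows where they come from:

from scipy.stats import norm

alpha = 0.05
print(norm.ppf(1 - alpha))       # about 1.645, the one-tailed cutoff
print(norm.ppf(1 - alpha / 2))   # about 1.960, the two-tailed cutoff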

Type I and Type II Errors

This goes along with hypothesis testing.

If you've rejected the null, found reason to believe that your research hypothesis might have some truth to it, but in reality you've made a mistake, then you made a Type I error.

If our test statistic falls into the rejection region (p of .05 or less), then we are saying that 19 times out of 20 we're correct in our decision to reject the null. But we're going to be wrong 1 time in 20, 5 percent of the time.

To make that less likely, we can decrease the rejection region, make it .01, so there's only 1 chance in a hundred that we reject the null by mistake.

We call the probability of making a Type I error alpha. The probability of making a Type II error is beta.

A Type II error occurs when we accept a false null hypothesis.

The ability of a statistical test to reject the null hypothesis when it is false and the research hypothesis is true is called the POWER of the test. The best way to increase this power is to increase the size of the sample.
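If you're curious, statsmodels will do the arithmetic relating sample size, effect size, alpha, and power for a two-group t test. The effect size of 0.5 (a "medium" effect) is just an assumption for the example.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Subjects needed per group to detect a medium effect (d = 0.5)
# at alpha = .05 (two-tailed) with power = .80
print(analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80))

# Power we'd actually have with only 20 subjects per group
print(analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=20))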

Hypothesis Testing

Why does this work, testing variables to see if they come from separate populations?

We get probabilities that enable us to say that our findings are significant because we believe that all things fall out, for the most part, normally. If I take an infinite number of samples from a population and test for any one thing, then make a frequency distribution of all of the findings, that distribution will look like a normal curve.

This is called a sampling distribution, and you can build one for any statistic (mean, median, mode, SD, variance), though usually we care about the mean.

Test results from those many samples will show that half of the sample statistics fall on one side of the average, the center line that bisects the top of the normal curve (see why you once had to learn geometry, so you could know the word bisect), and the other half will be on the other side of that center line.
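You can watch this happen with a small simulation. The population below is deliberately skewed (an exponential distribution, chosen arbitrarily), and the sample size of 50 and the 5,000 repetitions are assumptions, yet the means of repeated samples still pile up symmetrically around the population mean.

import numpy as np

rng = np.random.default_rng(5)
population = rng.exponential(scale=10, size=100_000)   # a skewed, non-normal population

# Draw many samples and keep each sample's mean
sample_means = np.array([rng.choice(population, size=50).mean() for _ in range(5_000)])

print(population.mean())                          # population mean, about 10
print(sample_means.mean())                        # the sample means center on it
print(sample_means.std())                         # their spread is the standard error
print((sample_means > population.mean()).mean())  # roughly half fall on each side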

So every time you test a sample, you can compare your result to that theoretical average, the norm, the normal curve. You're comparing two means, two averages, to see if the difference is so great that they probably don't belong in the same sampling distribution.

You either took a sample and divided it into two groups, intervening in one and not the other and comparing them, or you are merely testing one group to see if variables within that group are associated with one another.

Your null hypothesis is that the difference between these means is minimal; they are so alike that they must come from the same population. What you did, or what you found, is no big deal; a difference this small shows up by chance most of the time.

If you find a huge difference, however, one so large that only 5% or fewer of the samples in that sampling distribution would show it (whatever the trait you are testing: attitude, eye color, anger), then your sample looks like it belongs to another distribution, one in which most of the people really are like this.

You've established a new population, basically. What you did mattered, it is significant that two things are associated. Either what you did made the difference, or these things really do hang together in some normal distribution. Either way, the likelihood is greater, that under certain conditions, your findings are not due to chance.

On the other hand, assuming there is a normal distribution for what you found, in every population there is still a 1-5% chance that chance is the reason for your findings.

Now how confusing is that? Very confusing. Do your best to let it sink in.

Thus we test the null, the idea that the means of the variables we're testing are really the same in the two groups, against what is likely to be found in the universe, based upon the many samples that theoretically fall into normal distributions, to see if we can reject it.

But if they are different, then we can reject that hypothesis, the null hypothesis, which supports our research hypothesis.

Our research hypothesis is that the difference between the groups will be great because what we did, our intervention, made a big difference. There is only a 5% (or smaller) likelihood that there would be such a big difference in the parameter we've tested (usually a mean) were it not for the intervention. The significance test has made chance an unlikely explanation.

The significance test compared your results with what should happen in the universe if there were no differences between the two groups, if they came from the same sampling distribution. You've tested the independence of the means.

The sampling distribution is centered on one value, the average you would expect if nothing special were going on. Your sample is compared to that. If your result varies significantly from it, then something you did probably increased the likelihood that this would happen.

You knew that, which is why you theorized there would be some association between your intervention, your independent variable, and the dependent variable. You had a theory about why this would happen when you compared groups or variables in the first place.

Sunday, June 14, 2009

Univariate, Bivariate, and Multivariate Data

In other words, data analysis all depends upon how many variables you're analyzing

Univariate analysis is about analysis of one variable. Usually we'll find descriptive statistics (measures of central tendency), or measures of variability, i.e. the range, SD and variance. We'll often make frequency distributions to see how often a certain variable popped up in the data.

Bivariate and multivariate analysis attempt to do more than describe. They explain relationships between one or more variables: which variables are related, and how.

Bivariate implies an analysis of only two.

Multivariate analysis helps us to specify the conditions under which relationships hold. We simultaneously analyze relationships between variables.

Often we use cross-tabulations to tabulate the joint occurrence of 2 or more variables. The result is called a contingency table.

         Employed   Not employed   Total
Men          75           25         100
Women        25           75         100
Total       100          100         200


Here you can see that with univariate analysis, you would only have found that there are 100 men, 100 women, 100 people employed, and 100 not employed.

A bivariate analysis allows us to say that three-fourths of the men are employed, and only one-quarter of the women are employed.
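If your data arrive as one row per person rather than as a finished table, pandas will build the cross-tabulation for you. Here the 200 hypothetical cases are reconstructed to match the table above:

import pandas as pd

df = pd.DataFrame({
    "gender": ["Men"] * 100 + ["Women"] * 100,
    "employment": ["Employed"] * 75 + ["Not employed"] * 25
                + ["Employed"] * 25 + ["Not employed"] * 75,
})

print(pd.crosstab(df["gender"], df["employment"], margins=True))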

A contingency table like the one above allows you to "see" data. Another way, which some might consider a more refined way of looking at data, is to create a scatterplot. You should be able to identify one of these.

Read the post below to refresh your memories

Scatter Plots

Courtesy of UIUC, a direct copy from this link

Scatter plots are similar to line graphs in that they use horizontal and vertical axes to plot data points. However, they have a very specific purpose. Scatter plots show how much one variable is affected by another. The relationship between two variables is called their correlation.

Scatter plots usually consist of a large body of data. The closer the plotted data points come to forming a straight line, the higher the correlation between the two variables, or the stronger the relationship.

If the data points make a straight line going from the origin out to high x- and y-values, then the variables are said to have a positive correlation. If the line goes from a high value on the y-axis down to a high value on the x-axis, the variables have a negative correlation.




A perfect positive correlation is given the value of 1. A perfect negative correlation is given the value of -1. If there is absolutely no correlation present the value given is 0. The closer the number is to 1 or -1, the stronger the correlation, or the stronger the relationship between the variables. The closer the number is to 0, the weaker the correlation. So something that seems to kind of correlate in a positive direction might have a value of 0.67, whereas something with an extremely weak negative correlation might have the value -.21.

An example of a situation where you might find a perfect positive correlation, as we have in the graph on the left above, would be when you compare the total amount of money spent on tickets at the movie theater with the number of people who go. This means that every time that "x" number of people go, "y" amount of money is spent on tickets without variation.

An example of a situation where you might find a perfect negative correlation, as in the graph on the right above, would be if you were comparing the speed at which a car is going to the amount of time it takes to reach a destination. As the speed increases, the amount of time decreases.

On the other hand, a situation where you might find a strong but not perfect positive correlation would be if you examined the number of hours students spent studying for an exam versus the grade received. This won't be a perfect correlation because two people could spend the same amount of time studying and get different grades. But in general the rule will hold true that as the amount of time studying increases so does the grade received.

Go to UIUC (link at top) to see some examples. Most scatter plots will not show perfect correlations.
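And here's a minimal sketch of drawing one yourself, using matplotlib (assuming you have it installed) and made-up hours-versus-grade data. The slope and the amount of scatter are assumptions, built in so the plot shows a strong but imperfect positive correlation.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
hours = rng.uniform(0, 10, 40)
grade = 55 + 4 * hours + rng.normal(0, 8, 40)

plt.scatter(hours, grade)
plt.xlabel("Hours spent studying")
plt.ylabel("Exam grade")
plt.title("r = {:.2f}".format(np.corrcoef(hours, grade)[0, 1]))
plt.show()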

Measures of Association

Aside from describing variables and relationships, and noting central tendencies, or the degree of variation in samples, we also use numbers to describe the magnitude, or strength of relationships.

We use different statistics to express how associated variables might be, depending upon the type of data we’ve used in our sample, nominal, ordinal, interval, or ratio.

Measures of association for nominal data include lambda, tau, the phi coefficient, the contingency coefficient, Yule’s Q, and Cramer’s V.

For ordinal data we use gamma, Kendall's coefficient of concordance (W), Somers' d, and Spearman's rank-order correlation coefficient.

Linear relationships between two variables on interval or ratio scales use the Pearson product-moment correlation coefficient (r) or the coefficient of determination (r squared).
If the relationship is not linear, we use eta.

Inferential Statistics

Now let’s continue with the logic of inferential statistics, what we started in class on June 7. It is always possible that chance is responsible for our findings. You might remember that if we take random samples, then they are going to differ because of chance. But if we’re taking them from the same population, the differences will be small.

So if I take random samples off of a list of social workers who are members of NASW, then compare them for certain attitudes, it is likely the attitudes will be different, but if we’re taking large samples, the differences will be small. These people are really from the same population.

But if my results are really different, if the attitudes vary greatly, then we might wonder if the samples came from different populations. The differences in the samples might be attributed to training, for example.

In the same way, if we find ANY large differences in samples that we're comparing, the differences are less likely to be by chance. They are therefore significant. If we intervened in a sample in some way, perhaps by education or some other intervention, and the outcomes differ greatly, we have reason to believe that what we did caused the change in the sample that got the intervention. Now we say that THAT sample now represents a different population, the population of people who have had the intervention.


That’s what most tests of significance are seeking to find. Is the sample from an entirely different population? And why. We hope it’s because of something we did.
The question tends to be: how big a difference does it take to say that the outcome isn't due to chance, but rather is because of what we did, or what we suspected?

If you’re testing a group to see if they went to work within the first six months of their child’s life, you might also check out what kinds of jobs they’re hoping to get, or how much education they have. A hypothesis might be that women who go back to work in the first six months have prospects of better jobs because they had more education.

If this is true, if your sample shows that there is this strong likelihood, then we say that these women come from a different population from the ones who did not go back to work, a better educated population.

If there’s only a small difference in the two groups, the one that went back to work and the one that didn’t, then we say that small difference was due to chance. The groups are too alike.

The test will tell us that there is only a .05 or smaller probability that the relationship we found is due to chance. Then we'll say there must be a significant relationship here.
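A quick simulation makes the logic concrete. Two samples drawn from the same population rarely differ "significantly"; a sample drawn from a genuinely shifted population usually does. All the numbers here are invented.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

same_a = rng.normal(100, 15, 40)
same_b = rng.normal(100, 15, 40)    # same population as same_a
shifted = rng.normal(110, 15, 40)   # a different population

print(stats.ttest_ind(same_a, same_b).pvalue)    # usually well above .05
print(stats.ttest_ind(same_a, shifted).pvalue)   # usually at or below .05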

The null hypothesis and the research hypothesis

We don’t know, unfortunately, the population parameters of variables we wish to study. If we did, there would be no need to test samples. What we’re doing, when we measure samples, is trying to infer to the population from our sample’s results.
Tests of significance are simply called statistical tests.

Commonly used test statistics include the t, the F, and the chi-square. The distributions of these tests aren't exactly normal, but there are tables, nevertheless, for determining the critical areas under their curves and the probabilities that are associated with them.

There are four components to a statistical test: a null hypothesis, a research hypothesis, a test statistic (decision maker) and a rejection region.
The null hypothesis is a hypothesis about the population (from which we’ve drawn our sample) that asserts that sampling error explains the results. Our results are from sampling error, not the intervention.

We have two groups, one with an intervention, and one without. We test both of the groups and then compare the results. If the differences are large, then it is possible that the two groups now represent different populations. Our intervention caused the difference, and it's such a large, significant difference that we have to reject the null hypothesis that the groups are from the same population. We're testing to see if they're independent of one another.

These tests are often called tests of independence.

You don't need two groups. You can be testing one sample to see if variables are associated. In this case we're asking whether the two variables are related at all. If they are not associated, if there's an insignificant association, a low r, for example, then we say the variables are from the same population, and we don't reject the null.

The null hypothesis is saying that the two variables are from the same population. We do that test, and that test is how we determine whether we can instead accept the alternative hypothesis, which we call the research hypothesis.

The null is that the two variables are from the same population. If we accept this, we're saying the difference between the two variables is due to chance.

The research hypothesis is that the two variables are from different populations.
When we reject a null we automatically accept the research hypothesis. The difference is not due to chance. Our theory about why the two variables are associated significantly is supported by the research.

The null hypothesis is that the mean of the first group is the same as the mean of the second group. So if the means differ significantly, we reject the null and accept the research hypothesis that says that the two samples are from different populations.

What this distills down to is you doing a test, getting a result, and checking a table to see if that result is significant or not. Sometimes the result is the difference between two averages: you might be comparing the means of two groups, using a t test for two groups, an F test for three or more. Sometimes the result indicates how much two variables are associated with one another, how high or low the correlation, or r.

The table converts your test statistic into a probability, and you interpret that probability based upon the level of significance you set. So if you want to be at least 95% sure that your results are not due to chance, then the rejection region is a probability of .05 or smaller; that probability is the likelihood that your results were due to chance.

So if the finding on the table is .11, then you would not reject the null. .11 is not in the rejection region of your curve.

But if it is .03, then you do reject the null. There is little likelihood that chance is responsible for your findings; by chance alone you would get a result like this only 3 times out of 100. It's probably due to your intervention, or, if you're testing to see if there's a relationship between variables, there's reason to believe that they are associated, that it is not by chance, that it is a real phenomenon in the world.
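When you do reach for a table, all it's doing is converting your test statistic into that probability. A short sketch, with a hypothetical t of 2.30 on 58 degrees of freedom:

from scipy import stats

t_stat, df = 2.30, 58
p_two_tailed = 2 * stats.t.sf(abs(t_stat), df)
print(round(p_two_tailed, 3))        # about .025, inside the .05 rejection region

alpha = 0.05
print("reject the null" if p_two_tailed <= alpha else "do not reject the null")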