Thursday, 29 August 2013
Many years ago, a researcher came to me with a simple scale of overall well-being. The first question asked how well the respondent felt overall, and the rest asked about performing specific tasks such as climbing stairs. The aim was to measure overall well-being. She was investigating the responses of a specific group of patients who had leg injuries from motorcycle accidents. The scale did not seem to behave as expected. On investigation I found that the first item was routinely scored as "doing fine" while the rest could attract quite negative responses. This was because the group heard the general question as containing the words "except for your leg", even though those words were not there.
If the culture of the population you are investigating is different from the one the measurement scale was developed on, then you need to check that you can apply it. This is particularly true if you are translating the survey, even if you back-translate to check equivalence. No matter how careful your translation is, if the topic you are dealing with has been conceptualised differently, you will have trouble with the scale.
There are a number of things you can do to check. First, you can simply run a Cronbach's alpha; if this fails, you know you are in trouble. If you want to be more thorough, you can carry out a Confirmatory Factor Analysis (CFA). What you should not do is assume that ideas from one setting translate unproblematically to another.
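For the quick first check, here is a minimal sketch of Cronbach's alpha in Python. The item scores below are invented for illustration; any respondents-by-items array of scale scores would do.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering a 4-item scale scored 1-5
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 2))  # values below roughly 0.7 suggest trouble
```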
I came across another example this week. The person was looking at the way disability is viewed in this country and in a Middle Eastern country. The CFA, Cronbach's alpha, and an exploratory factor analysis all suggested problems. This emerged after she had conducted two large surveys, and from what I can see there is also an intervention in one survey. On top of that she sampled from several subpopulations, such as health care professionals and parents of disabled children.
In some ways she is fortunate: the analysis conclusively demonstrates that disability is constructed differently in the two cultures. However, it is not enough to say that they differ; we also have to say how they differ, and probably why they differ. The only way to get at this is to go back to each item and compare it across the cultures. So not the fancy techniques, but a great deal of fairly simple statistics. Then you try to make sense of it!
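A minimal sketch of that item-by-item comparison, assuming the two cultural groups' scores sit in separate respondents-by-items arrays. The data here are randomly generated placeholders, and a plain t-test per item is only one of several simple comparisons you might run.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical item scores (respondents x items) for the two cultural groups
rng = np.random.default_rng(0)
group_a = rng.integers(1, 6, size=(40, 6)).astype(float)
group_b = rng.integers(1, 6, size=(40, 6)).astype(float)

# Compare each item across the two groups: means plus a two-sample t-test
for i in range(group_a.shape[1]):
    t, p = ttest_ind(group_a[:, i], group_b[:, i])
    print(f"item {i + 1}: mean A = {group_a[:, i].mean():.2f}, "
          f"mean B = {group_b[:, i].mean():.2f}, p = {p:.3f}")
```

Items where the groups pull apart are the ones to take back to the wording and ask how each culture might have read them.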
Thursday, 8 August 2013
Reviewer's comment answers questions
I have had a tough paper on the go; it is the sort that drives you up one wall and down the other. Now, this paper is from a designed study, and a designed study normally takes a couple of afternoons to analyse. Not this one. I have probably spent a couple of weeks on it, and we are at least on our tenth round of analysis. Indeed, we sent the paper off to a journal knowing there were problems with it, but also knowing we were going around in circles analysing it.
It was rejected. Well, no real surprise there! However, we got the reviewers' comments back.
I should explain that we have an expert on the topic who conducted the study, but she is not writing the paper; she probably thinks it is too simple. The thing was, we had to classify people into analytic groups after they had been recruited. Although we deliberately set out to recruit people with specific characteristics, we found that people were unstable between recruitment and the trial; that is, they swapped groups.
What happened was that one of the reviewers' comments stood out. It suggested an analysis we had not done, and this was after we thought we had analysed the data every which way. We did it, and the analysis popped out with the results just as we felt they should be. No playing, no kludging, straight there in broad daylight.
You see, the reviewer had two things:
1) Distance: we had got too close to the data and could not see the wood for the trees.
2) A wider perspective: we were too narrowly focused on the design of the study and how to analyse it. The reviewer was able to say, basically, "now X might well be interesting", and put his or her finger on the one thing we were overlooking.
So to reviewers who take the time to give a useful review: thank you! Even when you do not recommend acceptance!