Frequently Asked Questions.
The Early Years DAWBA (2-4 years old) has been used less extensively than the standard DAWBA, but the initial findings are encouraging and several researchers are currently carrying out analyses that should lead to publication.
Professor Tamsin Ford and her colleagues are carrying out some of these analyses in conjunction with a Norwegian group. The findings are linked to the recent government survey (https://digital.nhs.uk/data-and-information/publications/statistical/mental-health-of-children-and-young-people-in-england/2017/2017).
We are hoping to improve and relaunch the 18+ SDQ this year. Even then, we will have to wait for others to carry out research on adult populations before we understand what adult norms may be. We suspect that for young adults (roughly 18-26), the norms are not that far from the norms for under-18s.
As regards the SDQ: multi-informant SDQs are an excellent guide to "caseness" (i.e. predicting whether a child or adolescent has a significant mental health difficulty), but they are weaker at predicting exactly what sort of disorder that is. Weaker, but not useless: if the child meets caseness criteria, the SDQ provides useful information on whether the disorder is likely to be behavioural, emotional or autistic in type. However, diagnostic predictions from the SDQ are less strongly predictive than DAWBA-based predictions, and are most likely to be predictive at a group rather than an individual level.
My recommendation is to use a stepped assessment where possible, beginning with the SDQ and proceeding to a DAWBA if the SDQ is positive [for mental health problems]. As we increasingly recognise how common child mental health problems really are, how persistent they can be, and how much they damage children and society in terms of suffering, economic cost and restricted futures, the case for stepped assessment becomes increasingly persuasive.
To return to your original question, expecting to identify mental health disorders accurately and cheaply with a single screener is generally unrealistic. Identifying high-risk cases with the likes of the SDQ is realistic, but characterising the type (or types) of disorder will generally require a more detailed supplementary assessment such as the DAWBA. Such a stepped approach is affordable in the UK - what is not affordable is to blight children's lives unnecessarily, costing more in the long-run. The fact that we commonly do so is a sad reflection of the low priority of mental health (despite fine but empty promises to the contrary)!
However, symptoms are not the same as functional impairment or even distress. If it is critical to your remit that you measure impairment and distress directly, we suggest that you use the SDQ along with its very brief impact supplement (attached).
You raise the question of cut-offs, and there are indeed well-documented SDQ cut-offs - but we would caution you that people sometimes take these too literally. We humans like cut-offs, since it simplifies our world to have good scores and bad scores. Sadly, reality is more complicated: on careful inspection, there is a continuous gradient from good to bad scores, without any step-wise transitions. The SDQ has particularly good dimensional properties, with every point increase in the total difficulties score proportionally raising current and future rates of psychopathology.
One option is to ignore this dimensionality and draw an arbitrary line in the sand - and this, to be honest, is what many SDQ users opt to do, and they mostly seem happy with the simplification.
However, many statisticians argue that a better response is to embrace dimensionality: don't get too hung up on how many individuals are above or below some arbitrary cut-off, and focus instead on the group mean. This is statistically more powerful, with the mean SDQ score of a group in the population being highly predictive of the mean rate of mental health disorders in that same population. We like this approach, but have to concede that dimensions can sometimes be harder for some clinicians and policy makers to interpret - in which case reversion to traditional cut-offs may be needed.
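To make the contrast concrete, here is a minimal Python sketch. The scores and the cut-off are entirely made up for illustration (they are not official SDQ norms or thresholds); the point is that a population-wide one-point improvement shifts the group mean while leaving the proportion above a fixed cut-off unchanged:

```python
# Illustrative (made-up) SDQ total difficulties scores for the same
# ten children before and after an intervention. Every child except
# one improves by one point.
before = [3, 5, 8, 12, 17, 6, 4, 14, 9, 2]
after  = [2, 4, 7, 11, 17, 5, 3, 13, 8, 1]

CUTOFF = 17  # arbitrary "high score" threshold, for illustration only

def summarise(scores):
    """Return (mean score, proportion at or above the cut-off)."""
    mean = sum(scores) / len(scores)
    above = sum(s >= CUTOFF for s in scores) / len(scores)
    return mean, above

for label, scores in [("before", before), ("after", after)]:
    mean, above = summarise(scores)
    print(f"{label}: mean = {mean:.1f}, proportion >= {CUTOFF} = {above:.0%}")
```

Here the mean drops from 8.0 to 7.1, capturing the improvement, while the proportion above the cut-off is 10% both times - which is exactly why the group mean is the more sensitive summary.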
There are many plausible impairment scales, but it is perhaps worth pointing out that there are advantages to using the SDQ to measure both symptoms and impairment, since there are long-established SDQ algorithms for predicting diagnoses from the combination of symptoms and impairment (in line with DSM guidelines, e.g. for the diagnosis of ADHD).
Secondly, it is relevant that the adolescent self-report SDQ includes a single item on whether the respondent thinks they have difficulties in any of the following areas: emotions, concentration, behaviour or being able to get on with other people. This is a very transparent question that is unlikely to raise ideological hackles. Policy makers who may be inclined to dismiss complex questionnaire scores find it easy to see the relevance, say, of the finding that 10% of the youth in their district think they have definite or severe difficulties with emotions, concentration, behaviour or being able to get on with other people.
There was no evidence of threshold effects for the SDQ at either high or low scores; rather, the odds of disorder increased at a constant rate across the range. Any decrease at the group level is good, and bigger changes are proportionately better.
In a given population, the proportion of children with a disorder is closely predicted by mean symptom score (JCPP paper attached). Many clinicians want a cut-off between what is good and what is bad, but for mental health as for so many other things, there is a continuum without any sharp cut-off.
Many parents like to see how their child's SDQ scores compare with those of other people of the same age in key domains such as behaviour, emotions and impact. Colour-coding scores into bands for "close to average", "high" or "very high" works well for some people, giving them an instant overview.
However, other people find computer bands a bit mechanical and prefer it when completed SDQs are used not as an end point, but rather as a starting point for further exploration and dialogue.
For example, "Looking over the answers you have given about your son, it seems as if he has a variety of emotional symptoms that are both upsetting him and interfering with his getting on with his everyday life - including how well he gets on with you and the rest of the family. But a brief questionnaire can be misleading, making things seem better or worse than they really are.
You are experts on your son - in his case, do you think we need to be thinking further about stresses affecting him and what - if anything - could or should be done about that? Perhaps nothing needs to be done, perhaps it would be good to repeat the questionnaire in 6 or 12 months, or perhaps the time has already come to look further into what options are available. What do you think?" That's just an example - the key lies in flexible responding based on training and experience.
When the mental age is well below 2, it is more likely that respondents will find the questions inappropriate or hard to interpret. The more complex the special needs, the more likely it is that the DAWBA will be appropriate: it includes the SDQ as well as sections on problems, such as autism and repetitive behaviours, that are more common among children in special schools.
The SDQ and DAWBA are increasingly widely used in special schools and with children who have cognitive impairment. For example, they have been used very successfully in a large study at Great Ormond Street in the UK for the evaluation of emotional and behavioural problems in children with chromosomal abnormalities (copy number variants).
In terms of using “three points” as a cut-off, and whether this is clinically significant: I’m afraid we have never defined what we consider to be “clinically significant” on the SDQ, although it is possible that some other users have done so (if you wanted, you could search for relevant papers in our SDQ journal article repository http://sdqinfo.org/py/sdqinfo/f0.py). To put that value in context, in the national British Child and Adolescent Mental Health Surveys of 1999 and 2004, the standard deviation for self-report at age 11-16 was 5.2 SDQ points, and the standard deviation for teacher-report at age 5-10 was 5.9 points. We don’t have normative US data for self- and teacher-report on our website (again, this may exist in the literature); we only have parent-report (http://sdqinfo.org/norms/USNorm.html). This indicates a standard deviation for the total difficulties score of 5.7 for 4-17-year-olds, very similar to the corresponding value of 5.8 for 5-16-year-olds found in the British national study. Putting these things together, an increase of around three SDQ points would probably represent a change of just over half a standard deviation - which would seem like an important change from a practical and public health point of view. (In terms of statistical significance, this would obviously depend primarily on your sample size.)
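As a quick sanity check on that arithmetic, the effect size implied by a three-point change can be computed directly. The standard deviations are the ones quoted above; the calculation is just a standardised effect size (Cohen's d = change / SD):

```python
# Standard deviations for the SDQ total difficulties score, as quoted above
sds = {
    "UK self-report, 11-16":   5.2,
    "UK teacher-report, 5-10": 5.9,
    "US parent-report, 4-17":  5.7,
    "UK parent-report, 5-16":  5.8,
}

change = 3.0  # hypothesised change in SDQ points

for label, sd in sds.items():
    d = change / sd  # standardised effect size (Cohen's d)
    print(f"{label}: 3 points = {d:.2f} SD")
```

Against the parent-report SDs (5.7 and 5.8), three points comes out at roughly 0.52-0.53 of a standard deviation, consistent with "just over half a standard deviation" above.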
As regards using a percentage threshold-based measure rather than a change in mean score: is there a reason why you prefer this, as opposed to comparing the mean score in the population before and after? That could be a good alternative, as we have previously shown that every one-point decrease in the mean SDQ score corresponds to a decrease in the prevalence of disorder (https://www.ncbi.nlm.nih.gov/pubmed/19242383). As such, our argument has always been that any significant decrease in the mean SDQ score can be assumed to correspond to an improvement in the underlying mental health of the population. The advantages of this approach are that it has potentially greater statistical power, and that it removes the need to select an inherently arbitrary threshold.
If you do decide to go for the “three points” approach, we believe you will need to consider the proportion of children whose SDQ score has deteriorated alongside the proportion who have improved. We would expect a non-trivial proportion of students both to increase and to decrease their SDQ scores by any given amount (e.g. three points), due to a mix of random factors (e.g. how the child feels on a particular day) and genuine “churn” in the population, such that at any time some children have improving mental health and some have deteriorating mental health. As such, knowing that e.g. 10% of children improved by at least three points would not by itself seem very interpretable to us without knowing the corresponding proportion who deteriorated by that amount. This could be another argument for focusing on the change in the mean score, and then perhaps segmenting that mean change into proportions of the population increasing or decreasing by a given amount (e.g. change of: <=-3 points, -2 to -1 points, 0 points, 1-2 points, >=3 points).
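A minimal sketch of that segmentation in Python - the band edges are the ones suggested above, while the baseline and follow-up scores are invented purely for illustration:

```python
from collections import Counter

def change_band(delta):
    """Classify a child's change in SDQ total difficulties score
    (follow-up minus baseline) into the bands suggested above."""
    if delta <= -3:
        return "<=-3"
    elif delta <= -1:
        return "-2 to -1"
    elif delta == 0:
        return "0"
    elif delta <= 2:
        return "1-2"
    else:
        return ">=3"

# Illustrative (made-up) before/after total difficulties scores
baseline  = [12, 7, 15, 4, 9, 11]
follow_up = [ 8, 7, 14, 6, 5, 14]

deltas = [f - b for b, f in zip(baseline, follow_up)]
print(Counter(change_band(d) for d in deltas))
```

Reporting the whole distribution of bands, rather than just the proportion who improved by three or more points, makes the "churn" in both directions visible at a glance.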
One final thing: you don’t say over what time period you are conducting this study. If it is, say, over a full school year, then using the standard SDQ twice would be fine, and maybe the simplest thing to do. However, if the follow-up period is less than six months, there is a bit of an issue in that the SDQ has a six-month reference window, so your follow-up measurement would also cover some of the pre-intervention period. One option you could consider is our “follow-up SDQ”, which has the same questions but a one-month reference period, plus a couple of additional questions about whether the intervention was helpful. In the versions on the website (http://sdqinfo.org/py/sdqinfo/b3.py?language=Englishqz(USA)) the standard wording is e.g. “since coming to the clinic”, but we could modify this for you to the language of your choice, e.g. “since participating in the healthy schools programme”.
In SDQscore (the traditional, paper-based scoring system), each interview gets a unique, persistent ID (we call this the IID). The IID is linked only to your account; you must provide mechanisms to associate IIDs with individuals. SDQscore can be used via SDQplus.
Self-entered and email-able assessments via SDQplus are US$1 per informant; all reports are included. Paper-based SDQ assessments are US$0.25 per informant scored, plus US$0.25 per optional PDF report. At present, there are no charges for registration, assessment storage, exports or software for any SDQ services. Self-entered assessments and paper-based assessments can be mixed for individual clients.
Organisations typically pay US$0.50 per assessment. Users are asked to confirm prices as they are incurred. The reason for the second confirmation is the optional PDF: if you want to save costs, print just the HTML report or take a screenshot and save it; the same information appears in the HTML report that is included with the score. The PDF is more attractive, and people find it easier to save.
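The fee schedule above is simple enough to sketch as a small calculator. The prices are the ones quoted above; `paper_cost` is a hypothetical helper name, not part of any SDQ software:

```python
# Fee schedule as quoted above (US$)
SELF_ENTERED = 1.00   # per informant via SDQplus, all reports included
PAPER_SCORED = 0.25   # per informant scored, paper-based
PAPER_PDF    = 0.25   # per optional PDF report, paper-based

def paper_cost(n_informants, want_pdf=False):
    """Cost of scoring paper-based SDQs, with or without PDF reports."""
    per_informant = PAPER_SCORED + (PAPER_PDF if want_pdf else 0.0)
    return n_informants * per_informant

# 100 paper SDQs with HTML reports only, then with optional PDFs
print(paper_cost(100))        # HTML only
print(paper_cost(100, True))  # including PDFs
```

Skipping the optional PDF halves the paper-based cost, which is the point of the second price confirmation.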
I recommend that all SDQ users have a look at SDQplus. If you stick with paper-based assessments, your costs need not rise, but you will find that the SDQ is much easier to use and gives you a lot more information.
In our experience built up over decades, the best way to do this is NOT to insist on a fixed or random order but to encourage respondents to start by looking at the menu and completing the section that seems most relevant to them or their child - and then go on to the section that seems to them to be the next most relevant to them or their child, etc. That is why the instructions that are presented along with the menu of options include:
"Choose the next topic. For more information on how to choose, click here." When they click, what they see is: "Choosing the next topic: When you go back to the list of colour-coded topics, red topics are the ones you haven't yet answered. Please choose the next new topic that seems most relevant to you / your son / your daughter."
In our experience, this generally works well. If your respondents are not reading the instruction to focus on answering the most relevant sections first (and, to be honest, most humans are not good at reading instructions), then perhaps your team could also tell them individually before they start completing the DAWBA about the desirability of starting with the most relevant sections. It would be even better if your team were also able to reinforce another point that we do already make (but that they might not read): Even if none of the new topics seems particularly relevant, it would help us a lot if you'd work your way through them anyway. If there's nothing much to say, you will find that they are very quick to answer. (This is true because of the way the skip rules work)
"Steals from home, school ..." - again, because of age differences, stealing is less relevant (and spitefulness more relevant) to preschoolers.
The SDQ is a fully-structured measure that is designed to be self-completed: as such, it does not require any training for the person administering it, who can simply hand the paper out. (Indeed, if the practitioner were actively helping the person completing the questionnaire to choose the right answers, that would make the responses no longer comparable with the norms provided - so this is not advised.)
Some practitioners do decide to use the responses on the SDQ as a starting point for further discussion. But that is not part of the SDQ per se; rather, it is a practitioner decision.
We only have one set of SDQ cut-offs, not a set that varies by country. And in any case, the US norms are very similar to the UK norms - see this article: https://www.sciencedirect.com/science/article/abs/pii/S0890856709616312
We generally try to discourage people from getting too hung up on the precise cut-off numbers for particular regions, since really the SDQ score cut-offs are somewhat arbitrary, and the scores should be interpreted as dimensions.
The possibility of adding the UserID to the export of SDQplus data will be explored, but there is a complication: UserIDs do not “own” PlusIIDs, even if they created the PlusIID. The PlusIID “belongs” to the AccountID and may be used by other UserIDs in the account; this is consistent with normal usage in clinical organisations, where patients are the responsibility of the organisation (consultant, firm or unit) but may have contact with many care-givers within that organisation.
Adding a "full list" export to SDQcohort is quite appropriate; "cohorts" of any type can then be created and exported via SDQcohort.
With respect to the SDQplus export, I do have a solution that will tell you how many times the UserIDs of the AccountID have accessed any particular PlusIID.
It is then up to you to decide which UserID you want to associate with the particular PlusIID. The exact dates of access are also available, but I think that would make the export unnecessarily complex. Perhaps in an API it would be more useful, but I will wait for someone to ask for it before I make it available.
As I say, I'm reluctant to make or encourage making a 1 to 1 association between PlusIID and UserID. That oversimplification might suit initially, but think about the 10+ year persistence of a child in a school doing biannual assessments. The UserIDs almost certainly will change over that period as they are assessed by different teachers and staff.