1. Overview of the Crime Survey for England and Wales screener module redesign

The Qualitative and Data Collection Methodology (QDCM) team at the Office for National Statistics (ONS) were asked to redesign and conduct cognitive testing of the screener and victimisation modules of the Crime Survey for England and Wales (CSEW), which measure the incidence and prevalence of crime. This work forms part of the CSEW transformation programme.

This report details changes to the questions and structure of the CSEW screener module. This module asks the respondent to report whether they have experienced any of 31 offence types (including crimes against their residence, vehicles and person, as well as fraud) in the previous 12 months. Further questions identify which incidents will be asked about in detail in a following victimisation module. Answers in the victimisation module are used to assess which offence code is allocated, and provide estimates of the prevalence, incidence, cost and nature of crime.

Building on previous research, and following respondent-centred design principles, our changes aim to address known issues with the existing CSEW questions, and make them suitable for an online, self-completion mode. A summary of the main changes follows.

Multi-feature incidents are now identified prior to the victimisation module (see section 3.1)

A multi-feature incident (MFI) involves more than one crime happening at the same time. Unlike the existing CSEW (see Appendix 1a: Redesign options for new approaches to multi-feature incidents and repeat incidence), the redesigned screener module allows respondents to report all offences they have experienced in the screeners, including those that occurred within an MFI. Questions then identify any offences that comprise an MFI and the priority offence on which to focus the victimisation module.

Prioritisation of the incidents to be taken forward to the victimisation module has been improved (see section 3.3)

It will now:

  • align more closely with the Home Office Crime Recording Rules (HOCR) and CSEW offence coding manual

  • prioritise the more serious crimes, according to an updated prioritisation order (see Appendix 2: Priority order for more information)

The approach to series crime has been modified (see section 3.4)

The existing CSEW defines a series as two or more incidents of the same offence type that a respondent considers "similar". The redesigned module defines a series as two or three incidents of the same offence type that the respondent considers "related"; the most recent incident will be asked about in the victimisation module (as per the current CSEW). Four or more incidents of the same offence type are automatically treated as a series, with the most recent incident being taken through to a victim form (within the victimisation module), to reduce respondent burden.
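For illustration, the redesigned series rule can be sketched in a few lines of Python (a hypothetical sketch; the function and value names are ours, not part of the questionnaire specification):

```python
def classify_repeat_incidents(count, respondent_says_related=None):
    """Classify repeat incidents of the same offence type under the redesigned rule.

    - one incident: a separate incident
    - two or three incidents: a series only if the respondent considers them "related"
    - four or more incidents: automatically a series, with the most recent
      incident taken through to a victim form (reducing respondent burden)
    """
    if count <= 1:
        return "separate"
    if count >= 4:
        return "series"  # automatic; the "related" question is not asked
    # For two or three incidents, the respondent is asked whether they are related
    return "series" if respondent_says_related else "separate"
```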

Question wording and ordering has been simplified (see section 4)

This change aims to improve the alignment of the questions with respondents' mental models, and to compensate for the absence of interviewer assistance in respondents' comprehension and response processes.

Capture of "attempted" crimes has been improved (see section 4.3)

When relevant, more screeners will ask about attempted crimes than in the existing CSEW. A new method has been introduced to prevent double counting of a single incident as both attempted and actual. This aims to improve data quality by capturing attempted crime more completely and accurately.

Fraud questions have been revised (see section 4.7)

The redesigned questions no longer ask whether a fraud occurred following a traditional crime, because this is not a data requirement; removing these questions also reduces respondent burden. The questions should more methodically route out respondents who are not the Specific Intended Victim (SIV) prior to the victimisation module, reducing burden and "out of scope" codes.

Next steps

The redesigned screener module specification has been programmed for use in cogability (combined cognitive and usability) testing, which will be conducted to evaluate the design and feasibility of online collection. Depending on the results, further stages of development may follow to redesign the victimisation modules, optimise the screener and victimisation modules for interview modes, and conduct quantitative testing (see section 5).


2. Redesign of the Crime Survey for England and Wales screener module

2.1 Reasons for undertaking the redesign

The Crime Survey for England and Wales (CSEW) is a victimisation survey for people aged 16 years and over. The survey incorporates a multimodal, longitudinal panel design where respondents are interviewed every 12 months across several waves. It also collects data on perceptions of crime, and attitudinal data on the criminal justice system and experiences of the police.

At the request of the Office for National Statistics's (ONS's) Centre for Crime and Justice (CCJ), the Qualitative and Data Collection Methodology (QDCM) team in the Methodology and Quality Directorate have undertaken methodological research as part of the CSEW Transformation programme (outlined in section 2.4).

The QDCM team were asked to redesign and cognitively test the screener and victimisation modules. These modules collect data to produce estimates of the incidence and prevalence of crime in the last 12 months. Prevalence refers to the proportion of the population who are victims of an offence once or more. Incidence refers to the number of incidents experienced per household or per adult.
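These two measures can be illustrated with a small Python sketch using hypothetical per-adult incident counts (the data and function name are ours, purely for illustration):

```python
def prevalence_and_incidence(incident_counts):
    """Compute illustrative rates from a list of per-adult incident counts.

    Prevalence: proportion of adults who experienced the offence once or more.
    Incidence: mean number of incidents experienced per adult.
    """
    n = len(incident_counts)
    prevalence = sum(1 for c in incident_counts if c >= 1) / n
    incidence = sum(incident_counts) / n
    return prevalence, incidence

# Hypothetical sample of four adults reporting 0, 0, 1 and 3 incidents
print(prevalence_and_incidence([0, 0, 1, 3]))  # (0.5, 1.0)
```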

The research aims to assess the feasibility of collecting data from screener and victimisation modules online, and to address issues known from previous CSEW research, regardless of collection mode.

The QDCM team identified the need to make improvements to the current screener and victimisation modules in our initial stage of Discovery research on the redesign of multi-mode questions methodology. In this, we explored issues associated with modal change from interviewer-led to online self-completion, for example, survey length, complexity, repetition, and the ambiguity of questions.

The research drew on previous findings by Verian (formerly Kantar Public), the current CSEW fieldwork contractor. More information can be found in Verian's Re-design of Crime Survey for England and Wales (CSEW) Core Questions for Online Collection (2018) and Research on Transforming the Crime Survey - Work Package A (2022).

The work described in this report formed Part 3 of our Discovery phase. It includes our initial specification of the proposed redesigned screener questions for potential use in an online mode from Wave 2 onwards. This is the first step of a programme of design and testing, which might also include redesigning the victimisation module for online collection and optimising both modules for interviewer mode (see section 5.3). We aim to meet the gold standard of the Government Analysis Function's three levels of Respondent Centred Design (RCD).

2.2 Current screener and victimisation modules

The core parts of the CSEW are the screener and victimisation modules. The current (2024 to 2025) screener module includes 31 questions. Each question asks whether a respondent has experienced a particular incident in the last 12 months, which may result in a crime code. Screener questions are grouped into "traditional" crimes, and fraud and computer misuse. Traditional crimes are divided into three subgroups:

  • crimes against the household's vehicles

  • crimes against the household's residence

  • personal crimes (against the person or their property away from home)

The fraud and computer misuse questions were added in 2015, following the traditional crimes so they would not disrupt the time series.

Incidents identified in the screener module are followed up with a victimisation module (up to six victim forms per respondent). There are different victimisation modules for traditional and fraud incidents. These modules ask for further details, including when and where the incident occurred, who did it and what happened. Office-based coders assess whether the incident amounts to an offence, and assign the most appropriate offence code using a coding manual. Offence code classifications are largely aligned with the Home Office Crime Recording Rules for frontline officers and staff.

If a respondent has experienced more than one incident of a particular crime and considers any of them to be "similar", this is classed as a "series" (this is discussed further in section 3.4) and only the most recent incident is asked about in the victimisation module. Currently, both traditional and fraud screeners are asked prior to the questions to identify series crime.

2.3 Previous changes to the Crime Survey for England and Wales and public consultation on Transformation

The content and structure of the CSEW had remained broadly consistent since its introduction in 1982. However, in response to the coronavirus (COVID-19) pandemic, a simplified version of the CSEW (the Telephone Crime Survey for England and Wales, or TCSEW) was designed for telephone mode to replace the face-to-face survey in May 2020.

Survey questions that produced data to directly determine an incident code were retained, peripheral questions removed, and other survey and sample design adjustments made.

CCJ's Comparison of data from the two surveys methodology showed that there were no statistically significant differences in estimates for most headline crime types (although changes to the question on threats resulted in differences; this is covered in section 4.6), and that the survey estimates were broadly comparable.

In May 2022, CCJ conducted a public consultation on the redesign of the Crime Survey for England and Wales. CCJ sought responses from stakeholders on proposals to:

  • implement a longitudinal panel design

  • develop a multi-modal survey

  • improve screener questions

  • review CSEW offence coding

The consultation emphasised that any changes to survey questions and content could disrupt the time series of the data.

Respondents to the consultation acknowledged the potential benefits of improving screener questions, and implementing a multi-modal instrument and panel design, on data quality and sample representativeness. Although concerns were raised about mode effects, data comparability and capturing complex crimes (multiple victimisation, including repeat incidents or multi-feature incidents), increased data quality was valued over comparability. Stakeholders understood that improving the screener questions would increase the accuracy of crime estimates and they welcomed the harmonisation of CSEW offence classification with the Home Office's Crime Recording Rules (HOCR).

In October 2022, following the consultation, the CSEW changed from a cross-sectional single mode survey to a longitudinal panel survey with a multi-modal design. It retained the face-to-face interview in the respondent's home at Wave 1, with the Wave 2 survey taking place by telephone. The online survey mode is being developed for implementation from Wave 2 onwards, with the intention that the redesigned online screener and victimisation modules are optimised for face-to-face and telephone modes.

With the implementation of Wave 2, CCJ want to combine these data with Wave 1 data to improve the granularity and reliability of the main crime estimates. Separate to this research, Wave 1 and Wave 2 data are currently being compared as part of the transformation programme, including the consideration of any potential mode effects on the data.

2.4 Research and other sources informing redesign

The redesign was informed by our Discovery Part 1 and Part 2 research and other primary and secondary information sources (following a User-centred design approach to surveys).

Discovery Part 1

In Discovery Part 1, we considered the current CSEW design, reviewed previous research into CSEW data collection, and conducted primary research with CSEW interviewers and coders. This phase identified methodological issues and question-related topics for further research, including limitations in the existing CSEW and Verian's online test survey. Discovery Part 1 concluded that improvements to the questions were needed (regardless of mode) and that an online mode could work for respondents with "simple" crime profiles (no or very simple crime experiences), but that it may be more difficult to design effectively for those with "complex" profiles.

Various methodological issues and question-related topics were identified for further research. Despite these challenges, we decided to continue investigating the feasibility of an online mode for all respondents, to potentially realise benefits of an online mode relating to:

  • survey costs

  • improved response

  • accessibility

  • the flexibility to introduce new questions, if required

Following on from Discovery Part 1, Discovery Part 2 included several research phases.

Discovery Part 2: mental models

Discovery Part 2 included a mental models approach (detailed research aims, methodology and findings will be published at a later date). Mental models research is a core activity of the discovery phase in Respondent Centred Design (RCD) and refers to "how people conceptualise topics and terms and process them" (Office for National Statistics, 2023).

In the context of our research, understanding participants' mental models meant examining how they conceptualised and recalled their experiences of crime, how they made sense of specific concepts such as "similar" and "separate" incidents, and the terminology they used.

We conducted 28 in-depth semi-structured interviews with a purposive (non-probability) sample, including people with and without experiences of crime. Interviews were attended by a lead interviewer and an observer who wrote detailed notes, including notes on body language and communication.

After the interviews, a rapid qualitative analysis (as described in the Social Research Association's blog post) was conducted using the observation notes. Further analysis was conducted later using recordings and transcripts of the interviews.

Some main findings from the mental models research were that participants:

  • varied in their mental models - for example, how they thought about, defined and articulated their experiences of crime

  • varied in how they recalled their experiences - for example, in forward or reverse chronological order, or by seriousness or impact

  • could recall details of their experiences, such as those that are asked about in the victimisation module

  • were mixed in their (hypothetical) opinions about survey mode, but we did not identify any significant issues with completing an online crime survey

We will outline specific findings that directly informed the redesign of the screener module.

Discovery Part 2: identifying priority data requirements from current Crime Survey for England and Wales

In preparation for the redesign, priority data requirements were identified through a review of current CSEW victimisation module questions, variables and associated outputs. The main variables and derived variables, essential to the coding of crimes and the production of CSEW data outputs, were prioritised. The variables relating to the broader circumstances of the incident, and not essential to determine offence classification, were deemed secondary outputs.

Discovery Part 2: designing with data

Four datasets, along with information drawn from the CSEW and Telephone Crime Survey for England and Wales (TCSEW), were used to inform our redesign. Using data to design surveys aligns with the gold standard of the Government Analysis Function's Respondent Centred Design Framework.

Frequencies and missingness

We used a descriptive dataset from the TCSEW (April 2021 to March 2022) and CSEW (October 2021 to March 2022) to provide insights about how respondents answer questions in the screener and victimisation modules. This contained data on frequencies of all response options, including "don't know" and refusals to answer, as well as information about how many respondents are routed to each question.

Data showed which screeners had the highest reported frequencies and informed how we redesigned the "how many times" question and answer options (see section 3.2). Insights from this dataset will also be valuable for the redesign of the victimisation module, for example, to understand whether existing response options are relevant and necessary.

Multiple victimisation and components of incidents

This dataset was drawn from the CSEW 2019 to 2020 and provided information relating to multiple victimisation and components of different crime types. Specifically, it provided:

  • an overview of the frequency and percentage of people that experienced no, one, or more than one crime in the last 12 months; this provided contextual insight into the proportion of the sample experiencing complex crime

  • counts and percentages of people who answered yes to any two crime screeners; we identified no discernible patterns to assist screener question design

  • counts and percentages of people who were assigned any two crime codes, for example, 4.5% of people assigned a crime code for bicycle theft were also assigned a crime code for threats; these data (along with the crime code data) were used to design user journey profiles (see section 2.4)

The dataset also provided counts and percentages of incidents grouped into their final crime code that feature a certain detail (such as injury or the use of force), based on answers in the victimisation module. Insight into the complexity of differentiating between single incidents that involve multiple components and incidents that involve more than one crime (a multi-feature incident, MFI) helped us word our questions to identify MFIs.

Out-of-scope fraud

Open-text incident descriptions captured in the victimisation module were reviewed where respondents answered "yes" to a fraud screener but the incident was not coded as a crime (for example, if it was out of the survey's scope).

Fraud cases are most often out of scope because the respondent is not the Specific Intended Victim (SIV) (see section 4.7). Insight from this review informed our redesign of fraud screener questions, which now aim to route-out non-SIVs prior to the victimisation module.

Open text responses for sexual assault cases

We also reviewed open-text descriptions of incidents where the respondent had answered "yes" to a screener and either indicated in the victimisation module that there was a "sexual element" to the incident, or had not provided an answer.

These responses helped us understand the range of incidents captured by the sexual assault screener question and informed its redesign (see section 4.6).

Discovery Part 2: user journeys

We also used a "user journeys" methodology to help inform our redesign. We ran various crime scenarios through the current CSEW screener and victimisation modules. This included:

  • hypothetical scenarios and scenarios that were informed by "real life" cases from our mental models research and CSEW data

  • a range of "simple" and more "complex" cases

  • traditional crimes and fraud, both actual and attempted

The results helped us understand potential experiences of respondents completing the survey, including:

  • level of ease of answering questions

  • potential problems arising from question wording and inconsistent interpretation

  • potential causes of response error

  • how respondents respond to self-completion survey questions, without interviewer assistance

Findings from user journeys improved our understanding of issues with the screener and victimisation module questions identified in Discovery Part 1. For example, overlaps exist between fraud screeners that may confuse the respondent. This informed the redesign of fraud screeners (see section 4.7). We also used similar scenarios to check for any obvious errors with our redesigned screeners.

Discovery Part 2: interview recordings

Observing face-to-face interviews and listening to complex crime interview recordings gave insight into how the existing CSEW works in practice. This highlighted issues, including:

  • the survey was too long, even for respondents who had not experienced crime

  • questions were repetitive and sometimes irrelevant

  • there were overly long lists of answer options or missing options

  • answer options were not always applicable to respondents based on their previous answers

2.5 Redesign process and principles

Based on our learning from Discovery Parts 1 and 2, when redesigning the screener module structure and questions we aimed to design a new approach, rather than make only small changes to previously tried approaches that had not fully resolved the problems.

In relation to the general structure and flow of the screener module, we designed options to address the Principal Crime Rule, which aims to capture the most serious crime within an MFI (as outlined by the Home Office's Crime Recording Rules, 2024), and repeat incidence (identifying series and separate incidents of the same screener or MFI). We considered approaches tested by the United States National Crime and Victimisation Survey (NCVS), such as interleaved and non-interleaved designs, and whether they could be feasible for the CSEW.

We developed ideas iteratively through a series of workshops, using interactive whiteboards and flowcharts to visualise potential designs. We then tested a variety of crime experience scenarios through the questions (which were conceptual at this stage), and routing, to identify their pros and cons. We anticipated potential issues that might arise for a respondent and looked at how designs would affect offence coding and estimation of prevalence or incidence.

To address potential issues, we adjusted the designs, retested the scenarios, and repeated as necessary. Sometimes, different ideas were developed in parallel and compared. Each iteration added more detail and complexity. In total, 13 options were explored (see Appendix 1a: Redesign options for new approaches to multi-feature incidents and repeat incidence) and pursued or rejected throughout the process. Our final specification for the module prototype is based on Option 12 (see Appendix 1b: Flowchart with example of Option 12 and Section 3: Changes to module structure and the main concepts).

As promising options emerged, we considered specific question wordings, response options, answer fields and refined the often-complex routing between them. This applied to the screener questions that identify which offences the respondent has experienced, and the further questions that enable counting of incidents and the generation of victim forms.

We conducted a thorough review of each question wording in the screener module. Where necessary, we sought clarification about offence coding processes and the production of the estimates and discussed potential changes with the CCJ.

The redesigned screener questions that identify whether respondents have experienced a specific crime type broadly cover the same content as the current set, with some differences in wording and ordering.

Throughout this process, we carefully considered the impact of our specification on the scope and quality of data collected. Amendments aim to improve the screener module and avoid any unintended negative impacts, such as a loss of detail required for offence coding. Through trial and error, we worked to create an optimal design and specification for cognitive testing (specifically "cogability" testing, which combines both cognitive and usability aspects).

These new design features are described in detail in Section 3: Changes to module structure and the main concepts and Section 4: Changes to screener questions.

Online question design principles

We focused on designing for online, although kept in mind the need to improve the structure and questions for use in other modes. In particular, we designed for small-screened devices because the survey needs to work effectively on smartphones (see section 5.3).

We aimed to reduce the length of question stems, minimise the number of answer options, include only one question per page and minimise written guidance. This was to avoid overcrowding the screen and reduce the need to scroll down to view the question or additional questions in full, which could otherwise be overlooked. We aimed to comply with accessibility requirements and general ONS online survey programming standards.

The research aimed to follow Respondent Centred Design principles to provide a positive experience for respondents, while still meeting the complex data requirements. We discuss optimisation of the online designs across interviewer modes and all waves, in a later stage of the transformation, in Section 5: Further research and development work.

2.6 Glossary

Algorithm pot

The set of incidents experienced by a respondent, from which the prioritisation algorithm selects which are to be followed up by a victim form (if there are more than six), and in which order.

Fraud and computer misuse

Offences used in the Crime Survey for England and Wales (CSEW), including use of personal information, being deceived out of money or goods, and interference with computers or other internet-enabled devices.

Higher priority crime and lower priority crime

Offences within a multi-feature incident (MFI) are ranked as the higher priority crime (HPC) or lower priority crime (LPC), determined using a priority system based on the Principal Crime Rule.

Home Office Crime Recording Rules for frontline officers and staff

The Home Office Crime Recording Rules (HOCR) provide a national standard for the recording and counting of offences recorded by police forces in England and Wales (known as "recorded crime").

Multi-feature incident

A multi-feature incident (MFI) is an incident that includes more than one crime, for example, a snatch theft may also involve a threat. The CSEW will give an offence code to one offence per incident, using a priority system based on the Principal Crime Rule.

Multi-feature series

A series where all the incidents comprise more than one offence type (an MFI), in the same combination.

Multiple victimisation

Being the victim of more than one crime, either of the same or different crime types.

Offence code

Each crime reported by respondents is assigned an offence code, based on information in the victim form. The codes are designed to closely match the code assigned by the police, had it been reported. Some reported incidents will not amount to an offence or will be otherwise invalid or out of scope of the survey, so will not be assigned a substantive code.

Prevalence and incidence

Prevalence refers to the proportion of the population who are victims of an offence one or more times. Incidence refers to the number of incidents experienced per household or per adult.

Prioritisation algorithm

The computational process within the questionnaire that selects which incidents recorded at screener questions are to be followed up with a victim form (if there are more than six), and in which order.

Repeat victimisation

A subset of multiple victimisation, defined as being a victim of the same offence two or more times (classified as either a "series" of similar incidents or as separate incidents).

Separate incident

An incident (whether MFI or non-MFI) that is not part of a series.

Series crime

In the redesign, for an incidence rate of two or three, the respondent is asked whether the incidents were related. An incidence rate of four or more is treated as a series automatically. This differs slightly from the current design, but in most cases a series should be recorded in a similar way, with negligible impact on incidence rates and secondary outputs.

Specific Intended Victim

A respondent must be the Specific Intended Victim (SIV) for a fraud code to apply. They must have responded to an initial communication or taken some action in a way that the perpetrator intended (for example, clicking on a link in an email, or ringing a given number), or have been the intended target even though they were never contacted.

Traditional crime

Offence types used in the CSEW, comprising three subgroups:

  • crimes against the household's vehicles

  • crimes against the household's residence

  • personal crimes (against the person or their property away from home)


3. Changes to module structure and the main concepts

3.1 Approach to multi-feature incidents and offence coding

A multi-feature incident (MFI) involves more than one offence type happening at the same time. This could be a single offence such as robbery, comprising assault and theft, or two discrete offences, only one of which would be coded in line with the Home Office Crime Recording Rules (HOCR) Principal Crime Rule.

The existing screeners ask respondents to only report one part of an MFI, at the first applicable screener, by including the phrase "apart from anything you have already mentioned" at the subsequent screeners. Therefore, respondents who have experienced MFIs cannot report all the offences they have experienced in the screener module; some may only be identified in the "incident checklist" questions in the victimisation module.

For example, if a respondent has experienced an incident involving a snatch theft and a threat, they should report this incident at the snatch theft screener only (as this is the first screener they are presented with), with the threat recorded in the victimisation module. In some cases, the latter offence may be more significant to the respondent, or of a higher offence coding priority.

MFIs and "double counting"

In the current Crime Survey for England and Wales (CSEW), reporting of the same incident at two separate screeners is known as "double counting" and is incorrect. Discovery Part 1 identified that interviewers use strategies to avoid this, for example, probing for full details of the crime when first mentioned and only recording at one screener. Despite this, interviewers often need to return through the survey to correct errors with double counting and avoid generating more than one victim form for the same incident.

Verian's (2018) development and test of an online version of the survey incorporated a series of check questions to detect and correct instances of double counting. This approach was regarded as too cognitively challenging, and respondents' attempts to correct errors sometimes resulted in further errors. This illustrates the difficulty of avoiding double counting in an online survey.

To avoid the need to prevent double counting, the redesigned approach allows respondents to report all offences experienced in the screener module and does not ask respondents to exclude anything that was part of an incident they have already mentioned.

Upon completion of the traditional screener questions, if more than one screener is answered "yes", questions are asked to determine which, if any, of the reported offences occurred in the same incident. The first question asked is:

Did any of these incidents take place at the same time?

If a respondent answers "yes", they are then asked a set of follow-up questions. For example, if a respondent had answered "yes" to more than two crime types, where the higher priority crime (HPC, according to the prioritisation described in section 3.3) was experienced once, they would be asked:

Did the [HPC] happen at the same time as any of the following incidents you've told us about?

Respondents would then be able to select incidents from the list of other screeners they had answered "yes" to, or, if no incidents were MFIs, they would answer "no". Rather than repeating the full screener question, shorthand versions would be used in the list, for example, "Someone got into your home without permission". Shorthand versions of screener questions are also used in the Step 2 "What happened" questions (see section 4.3). 

The number of MFI questions asked depends on the number of screeners that have been selected and answers to the "how many times" questions (see section 3.2). Running totals of the screener types and the number of times experienced are calculated and adjusted in the background of the program. This results in a derived list and count of both MFI and non-MFI incidents.
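The background derivation described above can be sketched as follows. This is an illustrative sketch only: the function and variable names are hypothetical, and the actual survey program logic is not specified in this report.

```python
# Illustrative sketch (hypothetical names): derive net counts of MFI and
# non-MFI incidents from the "how many times" answers and the incidents
# the respondent has grouped as occurring at the same time.

def derive_counts(screener_counts, mfi_groups):
    """screener_counts: dict mapping screener type to times experienced.
    mfi_groups: list of sets of screener types reported as one incident.
    Returns (number of MFIs, net non-MFI counts per screener type)."""
    non_mfi = dict(screener_counts)
    for group in mfi_groups:
        for screener in group:
            # each MFI accounts for one occurrence of each member offence
            non_mfi[screener] -= 1
    return len(mfi_groups), {s: n for s, n in non_mfi.items() if n > 0}

# Two burglaries, one criminal damage and one assault, where one burglary
# and the assault happened at the same time (one MFI):
counts = {"burglary": 2, "criminal_damage": 1, "assault": 1}
print(derive_counts(counts, [{"burglary", "assault"}]))
# → (1, {'burglary': 1, 'criminal_damage': 1})
```

The derived tuple corresponds to the "list and count of both MFI and non-MFI incidents" described above, which later questions and running totals draw on.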

This information is periodically shown to respondents and, at questions where the offence types are not displayed in the question stem, is available by clicking on a "twisty" (a drop-down display list). This reminds respondents of the scope of each question, as well as what they have reported, how incidents are being accounted for, and how the counts are being adjusted.

For incidents of fraud, because of the complexity of prioritising cases with and without monetary loss (see section 4.7), MFI information is not collected prior to the victimisation module. For this reason, the fraud section is now asked after questions on traditional crime, including those about series crime (see section 3.4).

MFIs and prioritisation of victim forms

The new approach improves how victim forms are prioritised. In the current CSEW, a maximum of six victim forms can be generated for each respondent. Where a respondent has experienced more than six incidents, a prioritisation algorithm selects which incidents should be followed up with a victim form, based on prioritisation criteria (see section 3.3). When there are between two and six incidents identified, the algorithm prioritises the order that victim forms are asked.

The advantages of the new approach are that:

  • we have more information about the crime types involved in each incident before the algorithm allocates victim forms, making the selection more accurate

  • we can be sure an MFI will not generate more than one victim form and the highest priority offence in an MFI can be taken through to the victimisation prioritisation algorithm pot

  • it does not seek to avoid double counting, which may be difficult in an online survey

  • by identifying MFIs and removing lower priority crimes from the algorithm pot, victim forms can be "freed up" for other offences, so we obtain a well-rounded understanding of crime (this is only relevant if the algorithm pot is greater than six - see section 2.2) 

  • MFIs can be identified prior to the victimisation module, so the number and repetitiveness of questions asked is reduced

  • both single offence codes, such as robbery, and two discrete offences occurring in one incident can be handled

3.2 Questions identifying repeat incidence

Currently, if a screener has been answered "yes", a question is asked to establish how many times that crime type has been experienced in the reference period. A numerical field allows for answers from 1 to 95, more than 95, or "too many to remember". The CSEW relies on information about the number of times a crime type has occurred to produce an incidence rate.

In 2022, the Qualitative and Data Collection Methodology (QDCM) team conducted cognitive testing of revisions to the Abuse During Childhood (ADC) module in the CSEW. This work provided insight into a similar repeat incidence question.

We found that asking participants how many times a screener (for example, emotional or physical abuse) had happened to them could be:

  • triggering, because of the need to recall detail of sensitive and traumatic events

  • burdensome

  • difficult to provide accurate answers for without estimating

Because domestic abuse is not limited to threats, violence and sexual assault (see section 4.6), these insights potentially apply to all CSEW screeners.

For this reason, we considered changing the "how many times" CSEW question from asking for a precise number to asking for a banded number, to reduce respondent burden and potential triggering. For example, giving the option of 1 to 2 times, 3 to 5 times, 6 to 10 times and so on, with bandwidths gradually increasing (as discussed in relation to questions on domestic abuse in Measurement of Domestic Abuse: Redeveloping the Crime Survey for England and Wales, Hester and others, 2023). We reviewed frequency data (see section 2.4) to consider what bands might be appropriate and how these might differ across crime types.

However, our ADC research found that banded options were problematic because respondents did not always feel they reflected their experiences accurately, even when provided with a band that included the number of incidents they had experienced. For example, if the banded options provided a maximum of "20 times or more", participants queried the absence of a higher band. However, providing a higher banded maximum may also be problematic if, for example, someone selects a lower band and feels this may be perceived as less serious compared with a higher band.

Although focused on finances rather than crime, QDCM's (2023) Household Financial Survey Transformation (HFST) research also found that banded options were not effective. Respondents were unsure which option to select if their estimate was close to the boundary between two bands, and they did not want to provide inaccurate information.

QDCM's ADC testing and Hester and others (2023) also discuss frequency questions, for example, asking respondents if they suffered abuse "once a week" or "more than once a week". However, as the redesigned approach identifies MFIs prior to the victimisation module (and therefore net numbers of MFI and non-MFI incidents), a precise answer to the "how many times" question is preferable to determine subsequent questions.

Therefore, the redesigned screener module continues to ask how many times a respondent experienced a particular crime in the last 12 months as an open question:

Since 1st [month, year], how many times has someone [...]?

1. [Open textbox - 3 digits, numerical only]

OR

2. Don't know

The main difference to the existing CSEW is that the "don't know" option will be presented upfront, rather than only being recorded if said spontaneously (when interviewer-led) or being hidden until the respondent tries to skip the question (online mode) (see section 3.5). "Don't know" is offered upfront to provide an option to respondents if they are unsure of an exact figure. However, if the respondent selects "don't know", they are then asked to provide their best estimate at a new question:

What is your best estimate of how many times this happened?

1. [Open textbox - 3 digits, numerical only]

OR

2. Don't know

Including a "don't know" option at this "best estimate" question was considered at length. By not presenting a "don't know" option, we risk respondents entering an inaccurate answer to progress. However, if we offer a "don't know" option, an accurate incidence rate cannot be recorded in the screener module and MFIs cannot be identified in the proposed way.

We decided that it was more important to avoid potentially very inaccurate numbers, which would create problems with later questions. Therefore, we will offer a "don't know" option at this question.

When selected, we assume an incidence rate of 1, as respondents would likely have answered "no" at the screener if they had not experienced the crime. We considered assuming an incidence rate of 2, as a respondent who is unsure may be so because the crime happened more than once. Their uncertainty may also relate to when repeat incidents occurred, rather than how many times, for example, whether both incidents occurred within the survey reference period, or one within and one prior to it. Assuming an incidence of 2 would generate two victim forms, and the respondent may struggle to complete both if one incident did not exist or occurred prior to the reference period.
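The resulting derivation rule can be sketched in a few lines (hypothetical names; the survey program itself is not published):

```python
def derived_incidence(count_answer, best_estimate_answer):
    """Hypothetical sketch of the incidence figure used for routing.
    Each argument is an integer, or None for a "don't know" answer."""
    if count_answer is not None:
        return count_answer
    if best_estimate_answer is not None:
        return best_estimate_answer
    # "don't know" at both questions: assume 1, since answering "yes"
    # at the screener implies the crime happened at least once
    return 1

print(derived_incidence(None, None))  # → 1
print(derived_incidence(None, 5))     # → 5
```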

We also considered asking respondents to give their best estimate through guidance included at the "how many times" questions. For example, "Please give your best estimate if you are not sure". However, this may risk satisficing, for example, respondents not fully considering the number of times they experienced a crime when, with thought, they could provide a more accurate answer. Introducing a separate question may reduce this risk.

To further reduce the chance of errors in reporting crime frequency, we have introduced checks at the beginning of the series questions. This means that respondents can correct errors if they identify inconsistencies in the counts shown.

3.3 Changes to the victimisation module prioritisation algorithm 

The current algorithm prioritises crime by the four main screener subgroups. Personal crime is the highest priority, followed by crimes against the household's residence, vehicles, then fraud and computer misuse.

Broadly speaking, the current algorithm prioritises the traditional crime subgroups in reverse of their order in the questionnaire. Personal crimes are prioritised despite being asked later than vehicle and residential crime; residential crime is prioritised over vehicle crime. Fraud and computer misuse are both asked last and have the lowest priority.

Where there are experiences of multiple separate incidents of the same screener, these are prioritised chronologically (the most recent first, then second most recent, and so on). The redesigned prioritisation algorithm still broadly reflects the severity of crime type, but the order has been refined to align more closely with offence coding priority order.

A comparison of the current and redesigned priority order can be found in Appendix 2: Priority order. The redesigned order is mainly informed by the Home Office Crime Recording Rules (HOCR) and the CSEW coding manual, a priority list of crime categories used to assign codes to incidents. The updated algorithm selects incidents to be assigned a victim form, starting with the highest priority crime (in the revised order), and then in order of priority and complexity:

  1. Most recent series crime (section 3.4)

  2. Most recent separate, or only, MFI

  3. Most recent separate, or only, incident (non-MFI)

This means that in rare complex scenarios, such as experience of more than one MFI with the same highest priority crime type, the new algorithm rules prioritise the "most recent of the most complex" incidents. Here, "complex" refers to the number of crimes involved, after prioritising by crime type. For example, an MFI involving three crime types would take priority over a more recent MFI involving two crime types.
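The selection order described above could be sketched as a sort over the algorithm pot. This is an illustrative sketch under stated assumptions: the class, field names and numeric encodings are hypothetical, and the report does not publish the algorithm's implementation.

```python
from dataclasses import dataclass

# category order within a crime type: series, then MFI, then single incident
CATEGORY_ORDER = {"series": 0, "mfi": 1, "single": 2}

@dataclass
class Incident:
    priority_rank: int  # 1 = highest priority crime type (revised order)
    category: str       # "series", "mfi" or "single"
    crime_types: int    # number of crime types involved ("complexity")
    recency: int        # larger = more recent

def selection_order(pot, max_forms=6):
    """Order the algorithm pot and select up to max_forms incidents."""
    ranked = sorted(
        pot,
        key=lambda i: (i.priority_rank,           # highest priority crime first
                       CATEGORY_ORDER[i.category],
                       -i.crime_types,            # most complex first
                       -i.recency),               # then most recent first
    )
    return ranked[:max_forms]

# An MFI involving three crime types takes priority over a more recent
# MFI involving two, when both share the same highest priority crime:
older_complex = Incident(1, "mfi", crime_types=3, recency=1)
newer_simpler = Incident(1, "mfi", crime_types=2, recency=2)
print(selection_order([newer_simpler, older_complex])[0] == older_complex)  # → True
```

The final example reproduces the "most recent of the most complex" rule for rare complex scenarios described in the text.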

Changes to the priority order and other aspects of the screener module mean that the algorithm can more accurately prioritise incidents because:

  • we can ensure victim forms are generated for higher priority crimes, including actual over attempted crime

  • more information, such as whether an incident is an MFI, is obtained prior to the victimisation module (see section 3.1)

  • the priority order more closely aligns with the HOCR and coding manual

  • we can identify whether an incident of fraud incurred monetary loss (see section 4.7)

  • out-of-scope fraud cases are excluded before victim forms are generated, meaning they will not be prioritised over in-scope cases (see section 4.7)

3.4 Changes to how series crime is defined and treated

Current CSEW questions 

The current CSEW treats repeat incidence of a crime as a series when the respondent reports that two or more of the incidents were "similar": 

You mentioned [X number] of incidents of [X]. Were any of these very similar incidents, where the same thing was done under the same circumstances and probably by the same people? 

If a respondent experiences a combination of single incidents and a series of the same crime, a victim form will be generated for the series and for each single incident (up to a maximum of six).

The most recent incident in the series will be asked about in the victim form, but all will be included in the incidence rates (subject to them occurring within the 12-month reference period). Incident dates and the series pattern (whether incidents occurred before, after or in between separate incidents) establish the order in which victim forms should be asked (or, if there are more than six forms, which will be excluded). 

Issues with existing series questions 

Verian (2022, page 19) and our (2023) research suggest that the existing series question is confusing and can be "cognitively challenging" for respondents. Interviewers and respondents may be unsure whether to class multiple incidents as "similar" if:

  • one incident is more serious than the rest, for example, repeated cases of criminal damage by a neighbour where one incident involves more substantial damage 

  • there are different perpetrators, but they are connected, such as family members

  • only two out of three conditions are met 

This is because classifying a series is "subjective" and depends on the respondents' interpretation of the question (Office for National Statistics, 2023). Findings from our mental models research (publication forthcoming) provide further evidence of potential respondent difficulty.

The questions that are asked to establish the series pattern also simplify how a mix of single incidents and a series might be experienced. If a separate incident was experienced both before and after a series, it would be assumed that the series occurred between the separate incidents. However, one or more separate incidents could occur during the series crime period. It is also assumed that only one series occurred when, potentially, two or more separate series of one crime might be experienced.

Alternative approaches considered 

To address these concerns, we explored alternative options for defining and measuring series crime (see Appendix 1a: Redesign options for new approaches to multi-feature incidents and repeat incidence). Although little detail was given, Verian's (2022) research suggested a "3 plus approach" to series crime could be explored. This would mean that an incidence of three or more of the same crime type would be treated as a series. If a respondent indicates two incidents, a "related" question, discussed later in this section, is asked.

Another option was to divide the existing "similar" series question into questions based on three "objective" criteria and one subjective criterion. For example, if multiple incidents met at least two of the following four criteria, they would be considered a series:

  • same perpetrator 

  • same location 

  • same circumstances that would differ for every crime type, for example, the same method of entry for a burglary 

  • whether the respondent thought the incidents were related

However, this more "objective" approach might also be cognitively challenging for respondents, particularly those who have experienced more than one series of the same crime. It was also difficult to introduce relevant criteria for all crime types, for example:

  • asking about the location of a crime would not apply to home-based crimes

  • it was unclear which circumstances would define sensitive crimes, such as sexual assault, as a series

  • respondents may be unsure or not know the answer to one or more of these criteria (which also applies to the existing "similar" question)

Therefore, we have redefined what constitutes a "series" and how series are identified and treated in the CSEW, for both traditional crime and fraud. The treatment differs depending on whether the incidence of a crime is two or three, or four or more.

Treatment of two or three incidents of a crime

Similar to the existing CSEW, the redesigned questions aim to identify if two or three incidents of the same crime were "related", for example: 

Do you think the 2 times [someone did the specified crime type] are related to each other?

Our mental models research showed that the word "related" was understood to mean at the "same time", of the "same nature" or by the "same person". The use of "similar" in the existing series question was therefore changed to "related".

This simplified wording enables the respondent to judge whether repeat incidents form a series without having to meet the criteria within the existing question. Broadening the scope of the question may potentially increase the number of repeat incidents being defined as a series, compared with the existing CSEW.

Treatment of four or more incidents of a crime

The redesigned approach applies a "4 plus rule" to repeat incidents. This means that if a respondent has experienced a crime with an incidence of four or more (after any MFIs have been identified and deducted from the crime count), it will be treated as a series. Only the most recent incident will go through to the algorithm pot to be assessed for victim form generation. 

This approach reduces respondent burden for those with complex crime profiles, removing the need to ask a potentially complicated set of questions to identify whether this repeat incidence is formed of only related or separate incidents, a combination of both, and their chronological sequence (the "series pattern").

The 4 plus rule means that repeat incidents are treated as a series when they might be separate. We considered asking a check question to confirm if an assumed series is an actual series when the algorithm pot has capacity. However, respondents would potentially need to complete four separate victim forms for four different incidents of the same offence type. This risks increasing respondent burden.

However, the 4 plus approach should cause minimal data loss because it would only apply to a small minority of complex cases. Any loss will mostly relate to the nature of crime (because of a reduced number of victim forms), rather than the incidence rate. This is because the total number of incidents that occurred in a series contributes to incidence rates, even though only one victim form is asked.
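A minimal sketch of the 4 plus rule as described above (hypothetical names; not the actual survey program):

```python
SERIES_THRESHOLD = 4  # the "4 plus rule"

def to_algorithm_pot(crime_type, incident_dates):
    """incident_dates: dates of the remaining (non-MFI) incidents of one
    crime type. Returns (incidents entering the algorithm pot, count
    contributing to the incidence rate)."""
    if len(incident_dates) >= SERIES_THRESHOLD:
        # treated as a series: only the most recent incident enters the
        # pot, but all incidents still count towards the incidence rate
        return [(crime_type, max(incident_dates), "series")], len(incident_dates)
    # fewer than four incidents are taken forward individually (two or
    # three are first routed to the "related" question)
    return [(crime_type, d, "single") for d in incident_dates], len(incident_dates)

pot, incidence = to_algorithm_pot(
    "criminal_damage", ["2024-01", "2024-03", "2024-06", "2024-09"])
print(pot)        # → [('criminal_damage', '2024-09', 'series')]
print(incidence)  # → 4
```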

Identifying MFIs prior to the victimisation module should reduce the likelihood that the series will include incidents that may be coded as a different offence (these are currently only identified in the victim form at the "incident checklist" questions).

However, if a respondent experienced all the crimes reported at screeners four or more times, they would not be asked any questions to identify MFIs, because each crime would be treated as a series (according to the 4 plus rule). When the victimisation module is redesigned, we will consider including a question at the beginning of each victim form to ask if the incident is related to any other incidents.

Why a 4 plus approach and not a 2 or 3 or 5 plus approach? 

Similar to a 4 plus approach, a 3 plus approach would reduce complexity and respondent burden in an online context as fewer series questions would be required. It would also ensure consistency in the definition as it does not depend on respondents' interpretation of a series matching a defined set of criteria.

We also considered a 2 plus approach. However, both a 2 and 3 plus approach could result in data loss as it is perhaps more likely that lower numbers of repeat incidents are separate and do not comprise a series. Therefore, we would be grouping potentially different incidents together, and too few victim forms would be generated. Both the 2 and 3 plus approaches were considered less suitable when more data could be obtained with relatively little impact on respondent burden.

When trialling our different design options, we found that four is the optimal threshold for identifying series crime; any higher (such as a 5 plus approach) would be too complicated for the redesigned questions. This is because we cannot easily identify where a series might lie within four or more incidents; there may be multiple different series, or one or more series occurring between single incidents.

Multi-feature series

We have introduced the concept of a multi-feature series (MF series): an MFI comprising the same combination of crimes, occurring on more than one occasion, where the respondent identifies that the occasions are related. For example, two MFIs each made up of a sexual assault and an assault would be one MF series. If the respondent says they are not related, they would be treated as separate MFIs.

Similar to a non-MFI series, an MF series would be treated differently if the number of repeat incidents was two to three, or four or more. This ensures consistency in how series crime is treated across both incidents involving only one crime type and MFIs, which aims to align with HOCR, specifically the finished incident and Principal Crime Rules. It also aims to reduce respondent burden by ensuring that respondents who have experienced multiple similar MFIs only need to complete one victim form.

3.5 Other overarching changes

Reference date and recall period

Longitudinal considerations

Currently, the CSEW asks respondents to recall their experiences of crime since the first day of the month of interview, 12 months previously. For example, if an interview was conducted on 25 May 2024, respondents would be asked to recall incidents that have happened since 1 May 2023. If any incidents reported occurred between 1 May 2024 and 25 May 2024, the respondent would be asked to complete a victim form (subject to incident prioritisation). However, these incidents would be coded as out of scope as they fall outside the 12-month period and are not included in that year's estimates.
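The reference-date rule described in the paragraph above can be expressed directly (the function name is ours, for illustration):

```python
from datetime import date

def reference_start(interview_date):
    """First day of the interview month, 12 months earlier."""
    return date(interview_date.year - 1, interview_date.month, 1)

# An interview on 25 May 2024 asks about incidents since 1 May 2023:
print(reference_start(date(2024, 5, 25)))  # → 2023-05-01
```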

Changing the CSEW to a longitudinal design adds complexity. Currently, for the CSEW at Wave 2 (conducted via telephone), an interview is held as soon as possible once 12 months has passed since Wave 1. Using the above example, ideally the Wave 2 interview would be conducted in May 2025.

However, the dates used in the Wave 2 question stems will always align with the month in which the interview took place. If the respondent was not interviewed again until 21 June 2025 (because of the extension of the field period of the sample issued in May), they would be asked to recall their experiences of crime since 1 June 2024, rather than since 1 May. Therefore, there could be a gap of a month (May 2024) between the two 12-month reference periods across two adjacent waves. See Appendix 3: Example visualisation of reference period for an example scenario.

During the question redesign, we considered retaining out-of-period victim forms for inclusion in the following wave's estimates, or using them as fed-forward data by displaying incidents from Wave 1 in the Wave 2 questionnaire. This could be used to check whether an incident reported at Wave 2 was the same as an incident already reported at Wave 1, if the dates matched. Alternatively, it could be used to remind the respondent that they did not need to report the same incident again.

The benefit of this would be that collected data would no longer be discarded but would be used in the next year's estimates. This would:

  • avoid collecting duplicated data

  • be more efficient

  • avoid unnecessary burden on respondents

However, potential risks include:

  • concerns around ethics and confidentiality, such as someone who is not the respondent using the respondent's access code to enter the questionnaire and seeing what had been reported at the previous wave

  • reminding respondents of incidents they reported a year ago may be triggering

If fed-forward data were not used, respondents may forget that they reported an incident at Wave 1 and report it again at Wave 2. This could cause incidents to be double counted across waves, unless overlaps became clear during offence coding (for example, if the incidents had the same dates).

To avoid the risk of double counting across waves, rather than trying to collect Wave 2 data as close to 12 months after Wave 1 as possible, it has been provisionally agreed with the Office for National Statistics's (ONS's) Centre for Crime and Justice (CCJ) that respondents will be recontacted at least 13 months after their interview. This would also be the case for any subsequent waves.

Although this means that the data collection periods of Waves 1 and 2 will not be consistent (in the calendar months they cover) or contiguous (there would be a gap between them), it guarantees there will be no overlap in the reference period. This should prevent respondents double counting experiences they reported at Wave 1, provided they do not make recall errors such as forward telescoping (including an incident that took place before the reference date).
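The effect of the 13-month minimum gap can be checked with a short calculation. This is an illustrative sketch only; `add_months` is a helper defined here, not part of any survey system.

```python
from datetime import date

def add_months(d, months):
    """Return d shifted forward by a number of months (day clamped to 28)."""
    total = d.year * 12 + (d.month - 1) + months
    year, month = divmod(total, 12)
    return date(year, month + 1, min(d.day, 28))

def reference_periods_overlap(wave1_interview, wave2_interview):
    """Wave 2's reference period starts on the first day of the month
    12 months before its interview; it overlaps Wave 1's period if that
    start date falls before the Wave 1 interview."""
    wave2_ref_start = date(wave2_interview.year - 1, wave2_interview.month, 1)
    return wave2_ref_start < wave1_interview

wave1 = date(2024, 5, 25)
print(reference_periods_overlap(wave1, add_months(wave1, 12)))  # → True
print(reference_periods_overlap(wave1, add_months(wave1, 13)))  # → False
```

A 12-month gap leaves the periods overlapping by most of a month, whereas a 13-month gap guarantees Wave 2's reference period starts after the Wave 1 interview.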

This change would not prevent out-of-scope victim forms for incidents in the month of data collection (after the end of the 12-month data output period). If a respondent has more than six incidents in their algorithm pot and experienced a higher priority crime in the month of survey completion than any occurring within the 12-month output period, the victim form would be asked but discarded; a lower priority incident would not be taken through to a victim form. However, this also applies in the current CSEW.

Another consideration relating to longitudinal collection is that a series of incidents of the same type could span across two or more waves, for example:

  • a single incident in each wave

  • a single incident in one wave and multiple incidents in the next (or the other way around)

  • multiple incidents in each wave

We would need to ask a potentially complex set of questions to identify each situation. It would involve cross-referencing answers over the two waves, being subject to the same ethical considerations applicable to fed-forward data. Therefore, we propose that series crimes are treated independently at each wave, so that only incidents that occur during the recall period of each wave are asked about and reported.

Timeline tool

In the current survey, a respondent may be given a paper calendar to assist recall of incidents and when they occurred (for example, to reduce recall errors such as forward telescoping).

In Discovery Part 1, we found that Verian interviewers did not find the Life Events calendar useful. We considered whether an online equivalent could be provided. However, Verian (2018) found that this "did not aid recall and... caused additional confusion as respondents thought they needed to interact with the image". This presents a challenge in designing for a small-screen device (see section 5.3).

Further consideration may be given to developing a tool to prompt recall of incident dates if cogability (combined cognitive and usability) testing indicates a respondent need for it. Instead, we reiterate the reference period ("Since 1st [month, year] ...") consistently in every screener question.

As suggested by Verian (2018), we considered introducing an "earlier than the reference period" answer to questions asking for the date of an incident. For example, if the reference period began in May 2023, an answer option could be "Before May 2023". This would mean respondents could include incidents that happened outside of the reference period, but we could rule them out of scope prior to victim form generation.

The United States National Crime and Victimisation Survey (NCVS) redesign field test (2022, page 88) also suggested asking "if the incident occurred before, during, or after the first month of the reference period, then to ask for the specific month". For example, we could ask if the incident occurred "Before May 2023", "In May 2023", or "After May 2023". If they selected "After May 2023", we could ask in which month the incident occurred. The field test suggested this would help respondents recall whether incidents occurred inside or outside the reference period.

We decided these suggestions could put unnecessary burden on respondents. As such, we continue to ask respondents the date of the incident, but if they answer "don't know", we have introduced an additional question to ask for their best estimate of the month the incident occurred. This will still allow us to prioritise incidents by date, if more than six victim forms have been generated (see section 3.1).

Approach to "don't know" and "prefer not to say" options 

Currently, the CSEW allows "don't know" and "prefer not to say" answers to be recorded by interviewers if they are spontaneously given by a respondent in the screener and victimisation modules. Generally, response options for these answers are not presented to respondents upfront.

For online mode, we considered an optimal approach to "don't know" and "prefer not to say" answers, trying to provide a balance between:

  • allowing genuine "don't know" responses

  • the ethics of allowing refusal or not (including to potentially sensitive questions)

  • minimising any effects on routing, derivations and data quality caused by missing answers

  • avoiding satisficing, and trying to ensure respondents do not feel pressured to report an incorrect answer option in the absence of a "don't know" option; we want to avoid respondents completing victim forms unnecessarily, and incidence rates being misled, if they inaccurately answer "yes" or "no"

As in the interviewer-led modes, the "don't know" and "prefer not to say" options will not be offered upfront. Instead, if the respondent attempts to skip the question, they will be presented with these additional options. Exceptions to this rule apply; for example, a "don't know" option is presented upfront at the "how many times" and "best estimate" questions.

This method aims for equivalence between survey modes to reduce mode effects. However, it will need to be tested to understand the potential for false positive or false negative responses. Unless otherwise specified, "don't know" and "prefer not to say" responses are treated as a "no" or nil response for routing purposes, as in the existing survey.

4. Changes to screener questions

4.1 Changes to introductions, guidance and instructions

In the existing face-to-face Crime Survey for England and Wales (CSEW), guidance is available at relevant points for interviewers to add context or support respondents, when needed. This includes guidance for specific questions, to introduce sections, and general guidance.

For the redesigned module, we have revised and added to written guidance for respondents to suit the online mode and the changes to the screener questions. A comparison of the written guidance included in the current and redesigned screener module is summarised in Appendix 5: Current and redesigned preambles. The main changes we have made are:

  • adding instructions at the beginning of every screener section to include incidents committed by people known and unknown to respondents - this is to prompt inclusion of all experiences, including domestic abuse and hate crime, which is defined by the Crown Prosecution Service (CPS) as "hostility or prejudice, based on a person's disability, race, religion, sexual orientation or transgender identity" (see section 4.6)

  • introducing guidance about which home(s) to include at the home-based questions, if they have told us they have moved in the last 12 months; the previous residence screener questions were removed during the redesign because of repetition

  • adding additional guidance at screener questions about physical abuse, sexual abuse and threats, to prepare respondents for the sensitivity of the questions, to encourage them to be in a private place, and to reiterate that they can skip a question or take a break from the survey and return later, if required

"You can select more than one answer" wording

We use the wording "You can select more than one answer if these happened in separate incidents" at multiple choice questions throughout the survey. This is to highlight that the options provided are different to each other, compared with "Select all that apply".

The wording also aims to encourage respondents to only report an attempted crime if it happened in a separate incident to an actual crime (this is discussed further in section 4.3). This is because of the risk of double counting an attempted and an actual crime as the same incident.

Although we can capture such instances at the multi-feature incident (MFI) questions by asking if the actual and attempted crime happened at the same time (see section 3.1), respondents will not see these questions if they answer the "best estimate" questions (see section 3.2).

Text substitutions

Text substitutions are used throughout the redesigned screener module to tailor wording based on respondents' previous answers. The program will only show guidance and answer options that are relevant. For example, in the crimes against the household's residence section, the following text substitution is used:

In this section, please do not include any crimes that have happened at any other home, [text substitution, if moved home in last 12 months:] but do include any crimes you experienced at your previous address.
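As a minimal sketch only (the report does not specify the survey platform, and the function and variable names here are hypothetical), the substitution logic for this example could look like:

```python
# Illustrative sketch of a text substitution; names are assumptions,
# not taken from the survey instrument.

def residence_guidance(moved_home_last_12_months: bool) -> str:
    """Build the residence-section guidance, tailored for respondents who have moved."""
    text = ("In this section, please do not include any crimes "
            "that have happened at any other home")
    if moved_home_last_12_months:
        # Movers are asked to include crimes at their previous address
        text += ", but do include any crimes you experienced at your previous address."
    else:
        text += "."
    return text
```

The same pattern would apply to any answer-dependent wording: the program only displays the guidance and answer options relevant to the respondent's earlier answers.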

4.2 Order of screener questions

We have redesigned the screener questions to improve the order and wording for an online mode to account for there being no interviewer assistance. Our mental models research showed a tendency for people to discuss more salient or impactful crimes first. Therefore, we have changed the order of crimes against the household's vehicles and residence screeners, with questions about household's residence now asked first to better align with mental models.

The position of personal crime screeners (including sexual and physical assault) remains unchanged because of the potential sensitivity of asking these questions first. Asking them later in the module aims to reduce the risk of break-off. This also reduces potential context effects of asking about theft away from the home before asking about home and vehicle-based theft.

Questions on fraud and computer misuse are still asked last but are now treated independently of traditional crimes (see section 4.7).

4.3 Changes to coverage of attempted crime in screeners

In the coding manual, certain offences have separate codes for "actual" and "attempted" crime. However, in the current CSEW, some of the "attempted" offences are not asked about in a screener question, meaning they are unlikely to be recorded unless they are reported incidentally at another screener or identified in the victimisation module and offence coding process. For example, data provided in screeners on use of violence or threat may amount to an "attempted" assault.

To align with offence coding and increase consistency for respondents, the redesigned questions include an "attempted" option for all screeners where there is a relevant code:

  • theft from inside home

  • theft from outside home

  • theft of motor vehicle

  • theft from motor vehicle

  • theft of bicycle

  • other theft

  • sexual assault

  • physical assault

An option for attempted criminal damage is not included because of the scarcity of this offence: a respondent would need to have interrupted an unsuccessful attempt at criminal damage to know it had occurred; if damage was found later, this would be "actual" criminal damage.

In their online testing, Verian (2022) added several screeners about attempted crime. They typically used a paired approach, which asks about actual and attempted incidents of the same offence type on the same screen (in a grid design) with forced-choice answers, for example:

Since 1st [month, year], have any of the following happened?

Someone stole your bicycle
Yes
No

Someone tried to steal your bicycle but didn't succeed
Yes
No

They found that respondents sometimes answered "yes" to both parts to reflect the same incident (that is, an attempt that had succeeded was recorded as both the attempt and the actual crime). This is perhaps more likely to occur in a self-completion mode when there is no interviewer to check for double counting (see section 5.3). Verian went on to suggest testing actual and attempted crime in a single "yes or no" screener question, and determining whether a crime was actual or attempted in the victimisation module.

To mitigate the risk of double counting, we have built on this by introducing two-step screeners. Step 1 asks about actual and attempted crimes in a single "yes or no" question. If "yes", Step 2 asks whether they had experienced an "actual" incident, an "attempt", or both (in separate incidents). The two questions are on separate screens to display more clearly on small-screened devices.

The new approach means that more "attempted" offences can be identified within the screener module and included in the "pot" of incidents from which victimisation forms are selected. For respondents who have experienced more than six incidents, "actual" incidents can be prioritised over attempts. 

Here is an example of combined attempted and actual screener question wording:

Since 1st [month, year], has anyone got into, or tried to get into, your home WITHOUT permission? 

1. Yes

2. No

(Display if "skipped")

Or

3. Don't know

4. Prefer not to say

(Ask if above is "yes")

What happened?

You can select more than one answer if these happened in separate incidents

1. Someone got into my home without permission

2. Someone tried but failed to get into my home without permission

The option(s) selected at the Step 2 "What happened?" question identify the experiences that need to be taken forward for further questioning.
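To make the two-step flow concrete, here is a minimal sketch of the routing and prioritisation logic described above; the function names, offence labels and data structures are assumptions for illustration, not taken from the survey instrument:

```python
# Hypothetical sketch of the two-step screener routing and incident
# prioritisation; all names and structures are illustrative assumptions.

def step2_needed(step1_answer: str) -> bool:
    """Step 2 ("What happened?") is only asked if Step 1 is answered "yes"."""
    return step1_answer == "yes"

def incidents_from_step2(selected_options: set) -> list:
    """Map Step 2 selections to incidents for the victimisation "pot".

    Selecting both options records two separate incidents, which avoids
    the double counting seen with paired grid screeners.
    """
    incidents = []
    if "actual" in selected_options:
        incidents.append({"offence": "entry_without_permission", "attempted": False})
    if "attempted" in selected_options:
        incidents.append({"offence": "entry_without_permission", "attempted": True})
    return incidents

def prioritise(incidents: list, limit: int = 6) -> list:
    """Where more than `limit` incidents exist, "actual" incidents come first."""
    return sorted(incidents, key=lambda i: i["attempted"])[:limit]
```

Under this sketch, a respondent who selects both Step 2 options contributes two incidents to the pot, and attempts are only dropped once the limit of six is reached.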

Using the phrase "tried to" to capture attempted crimes was informed by participants describing their experiences in our mental models research.

The "tried but failed" wording was placed earlier in the sentence than in Verian's (2022) wording, "Someone tried to get into your home without permission but didn't succeed", to:

  • more clearly differentiate between the two response options

  • reduce the risk of respondents speed-reading the option and missing the "but failed" qualifier

Collecting more incidents of attempted crime will improve the accuracy of prevalence and incidence measures.

4.4 Changes to crimes against the household's residence screeners

Change to how crimes against a previous residence are collected

As the CSEW follows respondents rather than addresses, the household screeners have been changed for respondents who have changed address in the last 12 months.

Rather than asking separate screener questions about previous and current addresses, a single screener is asked. Text substitution is used within the guidance to ask respondents to include incidents that occurred at any permanent previous address in the last 12 months. There is no requirement to report previous and current home data separately and this change will not result in any data loss. It aims to reduce burden on respondents who have moved home in the last 12 months.

Structural changes to burglary and theft from inside home questions 

We have redesigned the burglary screener questions to simplify wording and align more closely with the definition in the coding manual. The current CSEW asks three screener questions, which capture burglary and attempted burglary:

During the last 12 months, that is [since the first of 'date'] has anyone GOT INTO this house/flat without permission and STOLEN or TRIED TO STEAL anything?

[Apart from anything you have already mentioned] in that time did anyone GET INTO your house/flat without permission and CAUSE DAMAGE?

[Apart from anything you have already mentioned], in that time have you had any evidence that someone has TRIED to get in without permission to STEAL or to CAUSE DAMAGE?

Burglary is defined in the coding manual as entry to the home without permission, regardless of whether anything is stolen, there is an attempt to steal, or damage is caused. Therefore, the simplified redesigned screener questions separate entry without permission from theft and criminal damage. This allows the survey to capture burglary incidents that the current CSEW screener questions do not, for example, someone entering the respondents' home without permission but without theft, attempted theft or criminal damage:

Since 1st [month, year], has anyone got into, or tried to get into, your home WITHOUT permission?

1. Yes

2. No

(Display if 'skipped')

Or

3. Don't know

4. Prefer not to say

What happened?

You can select more than one answer if these happened in separate incidents

1. Someone got into my home without permission

2. Someone tried but failed to get into my home without permission

Did they steal anything?

1. Yes

2. No

(Display if 'skipped')

Or

3. Don't know

4. Prefer not to say                

The question "Did they steal anything?" is included, not to determine whether an incident is a burglary (as it is, regardless of whether something is stolen), but to allow respondents to report experiences that may be salient to them. This prevents the theft being reported at a different, incorrect screener question if the respondent wanted to report it but did not see a screener to directly capture it.

The redesigned screeners also ask a separate question to capture theft, or attempted theft, by someone who had permission to be in the home. If a perpetrator had permission, the incident is not a burglary; it can only be coded as a theft, or attempted theft, from a dwelling if something was stolen, or an attempt was made to steal:

Since 1st [month, year], has anyone stolen, or tried to steal, something from inside your home when they HAD permission to be there?

The redesigned questions aim to reduce respondent burden and only ask questions necessary for offence coding. Data on burglary incidence may be improved as the redesigned questions capture burglaries that the current CSEW does not.

Other wording changes to household's residence screeners

We have simplified and clarified the questions about crimes outside the home that are elsewhere on the property. The current theft from outside a dwelling screener is:

And [apart from anything you have already mentioned], in that time was anything (else) that belonged to someone in your household stolen from OUTSIDE the house/flat - from the doorstep, the garden or the garage for example?

The redesigned screener is:

Since 1st [month, year], has anyone stolen, or tried to steal, something from elsewhere on your property?

This could be from your doorstep, garden, garage or shed. Please do not include theft relating to vehicles.

1. Yes

2. No

(Display if 'skipped')

Or

3. Don't know

4. Prefer not to say

What happened?

You can select more than one answer if these happened in separate incidents

1. Someone stole something from elsewhere on my property

2. Someone tried but failed to steal something from elsewhere on my property

Verian (2022) found that the existing wording "outside the house/flat" could be interpreted as any location away from the property, so the wording was changed to "elsewhere on your property". "Shed" has been added as another example of detached outbuildings that should be included.

A comparison of current and redesigned wording for all screeners can be found in Appendix 4: Current and redesigned screener questions.

4.5 Changes to vehicle questions

We have made changes to the vehicle screener questions to improve the accuracy of the incidents captured. For example, the current screener question for vehicle damage is:

And [apart from this], in that time [have you had your/has anyone had their] vehicle deliberately tampered with or damaged by vandals or people out to steal?

The redesigned question is:

Since 1st [month, year], has anyone deliberately damaged [your / your or someone in your household's] [car, motorcycle or other motor vehicle]?

The wording has been simplified to capture incidents where the perpetrator is not perceived to be a "vandal" or "someone out to steal". Currently, incidents may be excluded, depending on respondents' interpretation of these terms. The scope of the question has been broadened to align with the definition of criminal damage in the coding manual, which does not state that the perpetrator needs to be perceived in a particular way. Therefore, prevalence and/or incidence rates may increase as a result of this change.

4.6 Changes to personal crime screeners

The personal crime screeners capture both crimes against the person and crimes against their property away from home. Minor changes to these questions are outlined in Appendix 4: Current and redesigned screener questions. This section describes the changes to the questions about sexual assault, assault and threats.

Order of questions on crimes against the person

Currently, the threat and violence screeners are ordered as follows:

  1. Use of force or violence (assault).

  2. Threats.

  3. Sexual assault.

  4. Household violence.

We considered the possibility of a question order or context effect resulting from the new approach of allowing respondents to report MFIs at all relevant screeners. If respondents are asked first whether they had experienced an assault in an online mode, they may answer "yes" to report a sexual assault and risk counting the same offence again at a subsequent screener. This would not be a "true" MFI as we now define it, where two or more different offence types occur in a single incident.

Rather, it would allow a single offence in a single incident to be recorded twice. While this situation should be dealt with by identifying MFIs and prioritising one offence (see section 3.1), our aim is to avoid the burden and potential confusion of asking additional questions unnecessarily. Alternatively, respondents may not report the sexual assault at the sexual assault screener, having already reported it. To mitigate these potential effects, the ordering of these crimes has changed to:

  1. Sexual assault.

  2. Physical hurt (assault, previously use of force or violence).

  3. Threats and intimidation.

The household violence question has been removed and will be discussed later in this section.

Sexual assault

The current screener question about sexual assault is only asked at Wave 1 and, because of its sensitivity, is presented to the respondent on a showcard:

During the last 12 months, have you been sexually interfered with or sexually assaulted or attacked, either by someone you knew or by a stranger?

The question is not asked at Wave 2. An aspect of our work to assess online feasibility of the screener and victimisation modules includes whether this screener question can be asked online at Wave 2. We have redesigned it, aiming to simplify the language to focus on sexual assaults, rather than "interference" or "attacks":

Since 1st [month, year], has anyone sexually assaulted you, or tried to sexually assault you?

1. Yes

2. No

(Display if 'skipped')

Or

3. Don't know

4. Prefer not to say

What happened?

You can select more than one answer if these happened in separate incidents

1. Someone sexually assaulted me

2. Someone tried to sexually assault me

This screener aims to collect a range of experiences of sexual assault, including rape, attempted rape and indecent assault. Cognitive testing will be needed to check whether respondents understand the question as we intend, and feel able to report sensitive experiences.

Depending on the outcome, further consideration could be given to whether a version of the question can also be included in a telephone version, if required in future.

Assault and threats

The complexity of separating attempted assault, assault and threats, and harassment has been highlighted by Verian's research (2018; 2022), as well as by changes made to the threats screener question on the Telephone Crime Survey for England and Wales (TCSEW) during the coronavirus (COVID-19) pandemic. These concepts can be hard to differentiate, which makes it difficult to design mutually exclusive screener questions. To address some of these issues, we have proposed changes to the wording of the existing questions on assault, including the use of force or violence, and threats.

Assault and attempted assault

The current screener question for assault asks whether anyone has used deliberate force or violence against the respondent, with or without a weapon or object:

And again, [apart from anything you have already mentioned], since the first of ['date'] has anyone, including people you know well, DELIBERATELY hit you with their fists or with a weapon of any sort or kicked you or used force or violence in another way?

Assault can include spitting, pouring a glass of water over someone or setting a dog on someone. No injury is necessary.

The current CSEW does not include a question on attempted assault, despite it having an offence code. Attempted assaults are not the same as threatened assaults. The CSEW definition of attempted assault is where there was an attempt to use force, or where the respondent was threatened with a weapon. Currently, identifying attempted assault relies on respondents recording their experience at other screener questions and information collected in the victimisation module.

Verian (2018, Table 5a) noted that of the 606 incidents coded as attempted assault between 2010 and 2015, 84% were recorded at a screener question designed to capture threats rather than assaults. Only 13% of attempted assaults were picked up by the assault screener question. To address this, Verian (2018) recommended introducing a paired screener question covering attempted assault alongside actual assault. Testing revealed that this format improved comprehension as it helped the respondent differentiate between the two.

This was also trialled in Verian's (2022) research, along with paired screeners for other crime types. However, their analysis suggested paired screeners might have led to more double counting. As discussed in section 4.3, we have built on Verian's (2022) suggestion by combining actual and attempted assault in one screener, with an immediate follow-up question asking whether respondents had experienced actual assault, attempted assault or both. By asking explicitly about attempted assaults upfront, we anticipate that the misreporting of attempted incidents as actual incidents will be reduced.

The wording of the redesigned question has also been simplified. For example, we have replaced "including people you know well" with guidance at the beginning of every screener section (see section 4.1). We have also replaced "deliberately" with "on purpose" and "hit you with their fists or with a weapon of any sort or kicked you or used force or violence in another way" with "physically hurt you":

Since 1st [month, year], has anyone physically hurt you, or tried to hurt you, on purpose?

It did not have to cause you injury.

1. Yes

2. No

(Display if 'skipped')

Or

3. Don't know

4. Prefer not to say

(Ask if above is 'yes')

What happened?

You can select more than one answer if these happened in separate incidents

1. Someone physically hurt me on purpose

2. Someone tried to physically hurt me on purpose

These changes aim to reduce respondent burden and satisficing by significantly reducing the word count, and to reduce the potential for a "focusing" effect (the "focusing hypothesis", where the use of examples may constrain a respondent's answer). The impact of these changes would need to be cognitively tested.

"Common assaults" (where there was no or negligible injury) may be captured at the Step 2 "What happened" "attempted" response option, rather than the "actual" option. Details in relation to the experience of assault, for example, the level of injury or whether a weapon was involved, would be asked in the victimisation module. This is to determine whether the respondent experienced an attempted assault or an assault without injury.

We considered whether a follow-up question to the screener could identify whether an injury was sustained. However, as this would only be necessary for a very small number of respondents with more than six victim forms in the algorithm pot, we decided not to include it.

Assault with theft (robbery)

The coding manual defines a theft with force or attempted force as a robbery. In the current CSEW there is no robbery screener question, and robberies are most likely recorded as an assault or theft and identified as robbery in the victim form.

We do not propose using the term "robbery" in the redesigned screeners. Our mental models research found that participants used the terms "theft" and "robbery" interchangeably and not always in line with the coding manual definitions. For example, the term "robbed" was used to describe a mobile phone being stolen without the participant's knowledge (which would be coded as a "stealth theft"). As a result, we did not use the terms "robbery" and "robbed" in the redesigned screener questions, or isolate robbery in a separate screener (despite there being a "robbery" code).

As we now capture MFIs in the screener module, we can derive whether theft and violence (including attempts) occurred at the same time, prior to the victimisation module. This means we can more accurately prioritise assaults and robbery.
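As an illustration of this derivation (the offence labels and function are hypothetical, not the survey's actual coding logic), an MFI that combines a theft offence with a violence offence could be flagged as a potential robbery before the victimisation module:

```python
# Illustrative sketch only: offence labels are assumed placeholders,
# not the CSEW's actual offence codes.

THEFT_OFFENCES = {"theft_from_person", "other_theft"}
VIOLENCE_OFFENCES = {"assault", "attempted_assault"}

def potential_robbery(mfi_offences: set) -> bool:
    """An MFI combining theft and violence (or attempts) suggests a robbery,
    as the coding manual defines robbery as theft with force or attempted force."""
    return bool(mfi_offences & THEFT_OFFENCES) and bool(mfi_offences & VIOLENCE_OFFENCES)
```

A flag of this kind would only inform prioritisation; final classification would still rest with the victimisation module and offence coding.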

Threats

The current screener question aims to capture both verbal and written threats that have frightened the respondent:

And [apart from anything you have already mentioned], in that time, has anyone THREATENED you in any way that actually frightened you? Please include threats that have been made by any means, for example in person, on-line or over the telephone

Threats are defined as incidents where no force is used. If force or attempted force is used, this is coded as assault, attempted assault or a sexual offence, depending on the type of force used. A threat that involved a weapon should be coded as an attempted assault, if the threat occurred in person. We considered whether a follow-up question to the screener could identify whether a respondent was threatened with a weapon. However, as this would only be necessary for a very small number of respondents with more than six victim forms in the algorithm pot, we decided not to include it.

During the coronavirus (COVID-19) pandemic, amendments were made to the threats screener as part of the TCSEW design. This included the addition of "harassed" and "intimidated" in the question:

And [apart from anything you have already mentioned], in that time, has anyone threatened, harassed or intimidated you in a way that was intended to cause you alarm or distress?

Please include threats, harassment or intimidation by any means - for example in person, online, over the phone, or on social media.

The aim of these changes, as described in the Centre for Crime and Justice's (CCJ's) Comparability between the Telephone-operated Crime Survey for England and Wales and the face-to-face Crime Survey for England and Wales methodology, was to capture any harassment or intimidation during lockdown restrictions, as there was speculation about an increase in levels of harassment. The change of wording resulted in an increase in the range and number of offences that were captured, in particular violence without injury. However, it was not possible to distinguish a genuine change in the offences occurring from an effect of the new wording. As a result, when the CSEW resumed, the question reverted to the original wording.

As well as the term "harassment", insights from the TCSEW also suggest caution when using the term "intimidation". However, "intimidation" was used spontaneously by participants in our mental models research to describe threats and is included within offence codes. Therefore, the threats screener question redesigned for cognitive testing is:

Since 1st [month, year], has anyone threatened to hurt you, or intimidated you in any other way?

This could have been in person, online, over the phone or on social media.

We have already recommended the introduction of a combined screener for actual and attempted assault, to differentiate between attempted assault and threats or assault without injury. The redesigned threats question is placed at the end of the section, with the wording "in any other way", aiming to encourage respondents to record all outstanding experiences. Testing is required to determine whether this question will cause out-of-scope experiences to be recorded, for example, when a respondent has felt intimidated by a situation, rather than someone saying or doing something intimidating.

Potential inclusion of stalking and harassment

Stalking and harassment are currently captured in the CSEW by specific modules asked after the screener and victimisation modules:

  • stalking is a subsection within the self-completion module on Domestic Abuse, Sexual Victimisation and Stalking, administered to all respondents except those who spontaneously refuse to undertake the self-completion modules

  • a Harassment module, interviewer-administered to a subsample of respondents, was introduced in 2022 (for more information see CCJ's Experiences of harassment in England and Wales: December 2023 bulletin)

  • as part of the move to a longitudinal panel design, self-completion and some interviewer-administered modules are only asked at Wave 1 to keep the questionnaire to an acceptable length and reduce ethical concerns associated with asking sensitive questions during a telephone interview

During the screener redesign, ONS's CCJ identified a potential requirement to include stalking and harassment in the main estimates, which would require these topics to be incorporated in the screener and victimisation modules. The requirement has not yet been confirmed, nor any detail established. However, some of our preliminary thoughts and considerations include:

  • how stalking and harassment may overlap with existing screeners, particularly the threats question

  • whether these three topics could be asked in one screener (with specific details asked later), or in separate screeners

  • whether stalking and harassment should be included in the main estimates or kept separately as a self-completion module

Although our work to date provides some insight, clarification of data requirements and, potentially, further mental models research may be required before these questions can be fully considered. The time and resource required, and when the new questions should be developed - before, during or after cogability (combined cognitive and usability) testing of the whole module - will also need to be considered.

Our mental models research found that participants did not always associate the concepts of "stalking" and "harassment" with repeated incidents (as per the Home Office Crime Recording Rules (HOCR) definition). Terminology was also used interchangeably to describe experiences of harassment and threats, for example, "threatening" and "intimidating" were used in relation to both crimes. This suggests respondents may find it difficult to differentiate between the two concepts. As the harassment module includes the word "intimidation" throughout, it is important to consider potential question context effects and conflation with the screener module.

Domestic abuse

In the CSEW, domestic abuse is not reported as a discrete output in the main estimates and there is no specific offence code for it. The closest measures are the outputs published for violent incidents in relation to the victim's relationship to the perpetrator. This is defined as other household members, acquaintances, and strangers. A separate computer-assisted self-completion (CASI) module is included at Wave 1 to collect data on domestic abuse, covering violence and other types of abuse.

There is some overlap between the screener and self-completion module, as offences recorded in the screener may be experienced as part of domestic abuse. For example, domestic abuse might involve incidents such as threats, assaults, theft, criminal damage and fraud. Currently, if the same experiences are reported in both the main screener and the self-completion module, double counting is not a concern as both outputs are published independently. However, it potentially places a burden on respondents, who must tell us about the same incident twice.

Alongside the development of screener questions, the CCJ conducted a separate research programme to generate the best measure of domestic abuse. The resulting questions in the self-completion module (Wave 1 only) recognise how coercive control can often be interconnected with other domestic abuse crimes, and that a standard screener question is not appropriate.

During the Qualitative and Data Collection Methodology (QDCM) team's production of this report, the CCJ have been considering the inclusion of domestic abuse and other crime types associated with violence against women and girls (VAWG) within the main estimates of crime. Until now, only crimes collected through screener questions have been included.

This would be easier to achieve without overlap between the self-completion and screener questions. The CCJ already discourages use of the domestic violence figure from the screener questions, for example, this figure is not referenced within the bulletin.

Removing domestic violence incidents from the screener would reduce the incident estimate coverage and affect the headline incident measure. This requires careful consideration and stakeholder consultation before making such a substantial change.

Hester and others' (2021) research into domestic abuse found that respondents were able to follow instructions to think separately about partners or ex-partners, and about family members. Therefore, consideration should be given to whether respondents could be asked to report incidents committed by partners, ex-partners, or family only after the screener module.

Changing the method by which domestic abuse and VAWG estimates are collected would require the addition of the Wave 1 CASI module to the questionnaire for Wave 2 onwards.

We have not had time to fully consider the impact of this potential change to the method of producing survey outputs on our redesign of the screener module. A pause and review will be carried out to fully clarify the data requirements, adjust the design as necessary, and review the proposed schedule of development activities (see Section 5: Further research and development work).

The following two paragraphs relate to the redesign as it currently stands, but these considerations will need to be reviewed in light of these changes.

A person who has experienced what the law considers domestic abuse may not consider themselves a victim as defined in law or counted by the CSEW. For this reason, we do not use the term "domestic abuse" in question wordings or in guidance and introductions. Instead, we have repeated the guidance for respondents to include offences committed by people they know, or do not know, to encourage reporting. As discussed in section 4.1, we hope this will also capture experiences of hate crime, which is reported through individual screeners rather than as a discrete output.

Our mental models analysis found that participants generally considered domestic abuse holistically. As a result, respondents may have difficulty accurately counting repeat incidents, or separating individual screener incidents within a series or multi-feature series (MFS), as required by the screener module questions. The existing complexity in counting incidents may be helped by our new approach to recording MFIs and series crime. However, this will require consideration during cognitive testing.

Within-household violence question

The existing CSEW has a question about within-household violence in the physical violence section:

Apart from anything you may have already mentioned, during the last 12 months, has any member of your household (aged 16 or over) deliberately hit you with their fists or with a weapon of any sort or kicked you, or used force or violence on you in any other way?

Although this question overlaps with the use of force or violence question, it is asked on a showcard in face-to-face interviews for discretion and privacy. It gives the respondent a second opportunity to report incidents that they did not report at the earlier, verbally administered question. If they did report it earlier, the current approach of not double counting the same incident would apply.

As the online design is self-completion, the household violence question has been removed because the previous assault questions will have collected such incidents. This removes the risk of double counting, which could otherwise occur because the redesigned screener approach omits the "Apart from anything you may have already mentioned" clause.

Further consideration will be given to how the redesigned sexual assault and physical hurt questions should be administered in other modes (see section 5.3).

4.7 Fraud 

The existing fraud section can be complex and repetitive for respondents. This has been evidenced by Verian's research, by the mental models analysis, and by listening to and observing CSEW interviews. For this reason, it has been redesigned to simplify questions, route out incidents where the respondent is not the Specific Intended Victim (SIV) earlier (further details can be found later in this section), and generate victim forms more efficiently.

Removal of fraud screener repetition

If a respondent experiences a "traditional" crime, such as a theft from their person, and a subsequent fraud occurs where money is stolen from their bank account using the card that was taken, two victim forms would be generated for completion. The data requirement is that these crimes would be coded as two separate incidents.

Currently, if a traditional screener has been answered "yes", two sets of repetitive fraud screeners are asked. First, the respondent is asked if any of five types of fraud or computer misuse happened as a direct result of any previously reported incident. Further questions then ask if the same five fraud types were experienced when not resulting from previously reported incidents.

However, there is no data requirement to identify whether a fraud incident was the result of a traditional crime incident. Therefore, the redesigned questions have been simplified to ask one set, without establishing whether any fraud arose from a traditional crime. This reduces respondent burden and ensures that only necessary information is collected. Cognitive testing should aim to explore any potential effect on response processes, for example, whether a respondent understands that they should report fraud arising from both a traditional crime, and one that is experienced independently, at the same question.

Simplifying questions

Currently, the CSEW divides fraud into five types:

  • having personal information or account details used, or an attempt made to use them, to obtain money or buy goods or services

  • being tricked or deceived out of money or goods (in person, by telephone or online)

  • someone trying to trick or deceive a respondent out of money or goods (in person, by telephone or online)

  • personal information or details being accessed or used without permission

  • a computer or other internet-enabled device being infected or interfered with, for example, by a virus

As discussed in section 2.4, our user journeys work found that the fraud-related scenarios did not neatly align with any one type of fraud as they were not mutually exclusive.

Although the existing CSEW tries to overcome this by asking if two or more fraud types were "related", and if so, only coding the higher priority, this does not reduce respondent burden upfront. Therefore, we identified a need to reduce overlaps and restructured the fraud questions in a similar way to the proposed design for the traditional screeners. The following redesigned screener questions combine four of the existing CSEW screeners, but aim to collect the same data (in combination with the questions discussed later in the section):

Perinfa

Since 1st [month, year], has anyone used, or tried to use, your personal information or account details without your permission or knowledge?

Please do not include phishing you did not respond to, such as emails or phone calls.

Deceiva

Since 1st [month, year], has anyone tricked or deceived you, or tried to trick or deceive you, out of money or goods?

This could have been in person, online, over the phone or on social media.

Please do not include phishing emails or phone calls you did not respond to. Please include incidents where you later got money or goods back.

(Note that from this point, when showing question wordings we sometimes include the variable names used in programming the questionnaire, such as Perinfa and Deceiva in this example. We sometimes use the variable names for shorthand in the text.)

As with the traditional screener questions, a "What happened" question is asked after each screener to identify whether it was an actual or attempted fraud. The computer interference or infection screener is still asked separately.

Identifying monetary loss and improving prioritisation

The existing fraud questions do not enable the prioritisation of incidents in which a respondent lost money, despite this being a priority (during offence coding) over incidents of the same type with no loss. Questions on loss are only asked in the victimisation module. This means that more recent cases of fraud without loss are potentially being prioritised over those with loss (and incidents where a loss was reimbursed).

To overcome this issue, an additional question has been added after Perinfa and its "how many times" question to allow fraud incidents to be prioritised more effectively (Deceiva already asks if money has been lost):

In how many of these incidents did you lose any money, even if you got it back?

If you didn't lose any money, please type '0'.
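As an illustration of how this extra question could feed into prioritisation, the following sketch orders fraud screeners so that incidents with a monetary loss (even if later reimbursed) outrank no-loss incidents of the same type, with recency as a tie-break. This is a simplified sketch under our own assumptions only: the questionnaire itself is programmed in Blaise 5, and the class, field and function names here are illustrative, not the actual specification.

```python
# Illustrative sketch only: the real prioritisation is implemented in the
# Blaise questionnaire logic; all names and structures here are assumptions.
from dataclasses import dataclass

@dataclass
class FraudScreener:
    name: str             # for example, "Perinfa" or "Deceiva"
    times: int            # answer to the "how many times" question
    times_with_loss: int  # answer to the new monetary-loss question
    months_ago: int       # recency of the most recent incident

def priority_key(s: FraudScreener):
    # Incidents where money was lost (even if later reimbursed) outrank
    # incidents of the same type with no loss; recency breaks ties.
    lost_money = s.times_with_loss > 0
    return (not lost_money, s.months_ago)

screeners = [
    FraudScreener("Perinfa", times=2, times_with_loss=0, months_ago=1),
    FraudScreener("Deceiva", times=1, times_with_loss=1, months_ago=6),
]

# The older incident involving a loss now sorts ahead of the newer
# no-loss incident, which the existing questions could not achieve.
ordered = sorted(screeners, key=priority_key)
```

Under this illustrative ordering, the six-month-old incident with a loss would be prioritised over the more recent incident with no loss.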

Routing out the Specific Intended Victim

If a respondent received a phishing email, mailshot or cold call but did not respond to or engage with the communication, for example by clicking a link or providing further details, they would not be considered the Specific Intended Victim (SIV). A respondent must be the SIV for a fraud code to apply. Despite the inclusion of wording to exclude such experiences, CSEW data tell us that this is often not successful. A high number of victim forms are coded as out of scope because of the respondent not being the SIV, indicating that unnecessary burden is placed on them.

Verian (2022) found that an online mode resulted in a higher count of out-of-scope fraud than the telephone mode. Our mental models research found that participants were uncertain about how to define various fraud-related incidents, including those involving phishing attempts. When redesigning the screener questions, we considered a range of these fraud scenarios to test the new approach to routing out non-SIVs.

The redesigned questions aim to route out non-SIV respondents before the victimisation module more effectively. Guidance is displayed at Perinfa asking respondents to exclude phishing they did not respond to, such as emails or phone calls. However, if a respondent answers "yes" to Perinfa or Deceiva despite this guidance, they could still be taken through to the victimisation module unnecessarily, in a similar way to the current survey.

To reduce the chance of this happening, the redesigned attempted fraud questions ask if the respondent was ever contacted by the perpetrator and whether they responded to any communication, for example, by clicking a link or calling a number.

Frcont1

In any of the incidents where someone TRIED TO use your details, did they contact you?

Please do not include phishing you did not respond to, such as emails or phone calls.

Sivchek1

Ask if Frcont1 = "yes"

Did you respond to any communication, for example, by clicking a link or calling a number?

If the respondent was contacted but did not respond, we can deem this experience out of scope. The "how many times" count would be adjusted accordingly, and a victim form would not be generated. If a respondent was not contacted, they would still be routed to a victim form, as fraud could have occurred without their knowledge and without any communication with the perpetrator. Further questions would be asked in the victim form to ensure the incident was in scope.
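The Frcont1 and Sivchek1 routing described above can be sketched as a simple decision rule. This is illustrative only, not the Blaise implementation; the function name and Boolean representation are our own assumptions.

```python
# Illustrative sketch of the Frcont1/Sivchek1 routing described above,
# not the actual Blaise implementation; names here are assumptions.
from typing import Optional

def generate_victim_form(contacted: bool, responded: Optional[bool]) -> bool:
    """Return True if the incident stays in scope and a victim form is generated."""
    if not contacted:
        # Fraud may have occurred without the respondent's knowledge and
        # without any communication; further in-scope checks happen in the
        # victim form itself.
        return True
    # Contacted: in scope only if the respondent engaged with the
    # communication (otherwise they are not the Specific Intended Victim).
    return bool(responded)

# Contacted but ignored the phishing message: out of scope, no victim form.
in_scope = generate_victim_form(contacted=True, responded=False)
```

The key design point is that only the "contacted but did not respond" path is routed out; all other paths are resolved by further questions in the victim form.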

Based on our mental models research, we considered adding guidance for respondents to report incidents of fraud, even if they thought it was their fault. Upon further consideration, we decided not to add this to avoid inadvertently making respondents feel the fraud was their fault.


5. Further research and development work

The redesign of the screener module for online mode is the first part of a programme of research and development of the screener and victimisation modules for the longitudinal, mixed-mode survey design.

5.1 Prototype programming and cogability test of online screener module

A prototype of the online screener specification has been programmed in Blaise 5 software, in preparation for cognitive testing. The timetable and full scope of the iterative cognitive testing are yet to be finalised, but it will focus on exploring:

  • the newly proposed screener structure, to see how well it maps onto participants' mental models of their crime experiences

  • the accuracy of the new structure and algorithm in producing the correct number of victim forms for the correct incidents

  • how the terminology used within screener questions is understood, and how this relates to the accuracy of answers

  • participants' experience and levels of burden while completing the screeners

  • accessibility for different users

  • usability across different devices (for example, a mobile phone, tablet or laptop)

  • implications for the subsequent redesign of the victimisation module

The testing would take a "cogability" approach to explore both cognition (comprehension and subsequent response processes) and the usability of the survey instrument. We aim to recruit a diverse, purposive sample to test the redesigned screener module, including participants with potentially sensitive and complex crime experiences.

Prior to cogability testing, we will schedule a pause and review to consider a potential change in method for the collection and output of domestic abuse and violence against women and girls (VAWG) (see section 4.6). The screener specification and programmed prototype will likely require revision and the changes will be incorporated into the cogability test.

After cogability testing, we will assess the feasibility of collecting screener information online at Wave 2 onwards from all respondents.

5.2 Further steps

Should findings from cogability testing indicate feasibility, we have outlined potential further stages of work.

  • A quantitative test of the online screener module to assess the module's performance in the field, providing broad estimates of prevalence, and potentially of incidence, for each screener (offence coding would not be possible); a qualitative follow-up could be included to validate responses.

  • A review of data requirements, with the aim of simplifying the survey and reducing respondent burden.

  • Optimisation of the redesigned screener module for face-to-face mode.

  • Redesign and cogability testing of the victimisation modules (traditional and fraud) for an online mode; this would need to account for changes resulting from the new screener design.

  • Because of the length of the existing victimisation modules, cogability testing may need to be limited to, for example, optimising show cards from face-to-face to online and telephone modes and reviewing the open description question.

  • One or more quantitative tests, such as a Beta test of the full online screener and victimisation modules or parallel run of the full mixed-mode design and current CSEW.

These stages are in line with the Respondent Centred Design Framework and are subject to funding, resource, timetables and methodological design. To address the complexity of survey design and data requirements, we recommend a comprehensive programme of further research to minimise potential risks to data quality through various forms of non-sampling error, such as:

  • measurement error, including mode effects

  • processing error, for example, in offence coding

  • unit and item non-response, including attrition and break-off

Each research stage would depend on the outcome of the previous stage. There is no guarantee of feasibility for online collection to meet all of the existing data requirements, particularly for more complex crime profiles. Regular reviews would assess progress and whether to proceed or revise plans, scope or data requirements.

Should the screener cogability testing indicate it is not feasible to collect the full screener or victimisation modules online from all respondents, other options would need to be considered, such as:

  • discounting online collection altogether, with the focus instead on improving the face-to-face interview mode

  • collecting online at Wave 2 onwards but diverting respondents with complex experiences to an interviewer mode within wave, which would be logistically challenging

  • reducing the complexity of the data requirements, for example by measuring prevalence but not incidence, or reducing the amount of detail collected

5.3 Optimisation

To determine the feasibility of collecting data online at Wave 2, we have redesigned the screener module to work for self-completion and on small-screened devices. An "online, mobile-first" approach is often used when designing questions for mixed-mode surveys because questions that work well on small screens are likely to work on larger-screened devices and across interview modes.

The longer-term intention is to design questions that provide equivalent stimulus to respondents and obtain equivalent data in all modes and across waves. Questions do not need to be identical but can be adapted or "optimised" to take account of inherent features of each mode. Differences in data collected between modes and across waves can be caused by question design and mode effects on measurement error. These cannot be eliminated, but we can try to reduce their likelihood.

As far as possible in this stage of research, we considered how the redesigned questions might work in face-to-face and telephone modes, and in the panel design. However, once work on the online screener and victimisation modules has been completed, further optimisation may be required.

If the research and development show that online collection is not feasible at all, or only if subject to substantial changes to data requirements and design, improvements can be made for interview modes, as identified in our Discovery phase and Verian's research.

We have identified some aspects of the redesign to consider and explore further in the optimisation phase; these include:

  • interviewer and respondent burden could be high if the respondent's crime experience is complex and numerous screeners have been answered "yes" (particularly at the multi-feature incident (MFI) questions, which may require a high number of incident combinations to be read)

  • whether twisties (drop-down display lists, see section 3.1) will be available for interviewers to remind respondents of what they have answered and whether the current design, which displays a summary at the end of the screener section, should be retained

  • whether the "What happened?" questions need to be reworded to incorporate the answer options in the question stem; this will make them more appropriate for interviewers to read out

  • assessing, for each mode, whether sensitive screeners, such as violence and sexual offences, should be included, and if so, how they should be administered

  • adjusting introductions, guidance and instructions to reflect the mode or wave
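On the first point above, the potential read-out burden at the MFI questions can be illustrated with a quick calculation. Assuming, purely for illustration, that combinations are presented as pairs of screeners answered "yes" (the actual design is described in section 3.1), the number of combinations grows quadratically with the number of "yes" answers:

```python
# Illustrative only: if MFI combinations were presented as pairs of
# screeners answered "yes", the count grows quadratically with "yes" answers.
from math import comb

pair_counts = {n: comb(n, 2) for n in (3, 5, 8)}
# 3 "yes" screeners give 3 pairs; 5 give 10 pairs; 8 give 28 pairs
```

This is why interviewer-administered modes, where each combination may need to be read aloud, are a particular concern for complex crime profiles.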

5.4 References

Hester M, Fahmy E, Matolcsi A, McCarthy L, Myhill A, Panteloudakis I, Walker S-J (2021), "Research to support the redevelopment of survey questions on domestic abuse", University of Bristol.

5.5 Cite this methodology

Office for National Statistics (ONS), released 2 April 2025, ONS website, methodology, Crime Survey for England and Wales Transformation - Discovery Part 3: Redesign of the Screener module


Contact details for this Methodology

ONS Centre for Crime and Justice
crimestatistics@ons.gov.uk
Telephone: +44 2075 928695