Sagitta Market Research Ltd

Consumer Quantitative Research

Tel: +44 (0)1303 262259 - Email:


Decoding Market Research Jargon A-Z (Part I, A-C)

By Helen Lester, Sagitta Market Research Ltd.

Market research, like most industries, has its own jargon. However, whilst we as research professionals use these terms readily, we appreciate that clients outside the industry may not be so familiar with them. Over the coming months, we will therefore be providing a helpful guide with the aim of demystifying some of the terminology often used in the industry. This month, we are covering terms beginning with A, B or C. If you have any terms you would like explained or added, do contact us at

Analysis. Evaluating respondents’ answers (research results), either in aggregate form or broken down by various characteristics, in order to provide understanding and recommendations in response to the research objectives.

Brand mapping. A research technique where respondents are asked to position different brands based on the relationship between perceived key characteristics (e.g. product quality, innovation, environmental friendliness or price). This helps clients to understand the relative strengths and weaknesses of different brands and how they are perceived in relation to one another.

CAPI (Computer-Assisted Personal Interviewing). When face-to-face interviewers record answers to questions using laptops or tablets (e.g. iPads) in place of the more traditional pen and paper method.

CATI (Computer-Assisted Telephone Interviewing). CATI involves telephone interviewers typing respondents’ answers directly into a computer-based questionnaire, rather than recording them on a paper-based questionnaire.

CAWI (Computer-Assisted Web Interviewing). Similar to CATI, but rather than the results being stored on the interviewer’s computer or tablet, answers are entered via a web browser and stored directly on a server for instant, real-time results.

Closed question. A closed question is one where the respondent is not given the opportunity to elaborate on their answer (as opposed to an open-ended question), the answer being recorded against a predefined list of answers. An example would be: Could you tell me which, if any, foundation brands you have heard of? A closed question may be unprompted, as per our example (where the respondent answers without the assistance of a list of answers) or prompted (where the respondent is given a list of answers to choose from).

CLT (Central Location Testing or a Central Location Test). A type of research where consumers test a product in a central venue (see hall test). Participants are recruited (e.g. in the street) according to set criteria based on the characteristics of the target consumer profile.

Cluster analysis. A method used to classify people or items into mutually exclusive groups based on two or more attributes (e.g. social classification).
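For readers curious about what happens behind the scenes, here is a minimal sketch of the idea, using a hand-rolled k-means pass in Python. All figures and respondent profiles are hypothetical, purely for illustration; commercial studies use specialist statistical software rather than code like this.

```python
# Grouping hypothetical respondents into two segments by age and
# weekly spend, using a simple k-means pass.

def kmeans(points, centres, iterations=10):
    """Assign each point to its nearest centre, then move each centre
    to the mean of its assigned points. Repeat a fixed number of times."""
    for _ in range(iterations):
        groups = {i: [] for i in range(len(centres))}
        for p in points:
            nearest = min(range(len(centres)),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centres[i])))
            groups[nearest].append(p)
        centres = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centres[i]
                   for i, g in groups.items()]
    return centres, groups

# Hypothetical respondents: (age, weekly spend in £)
respondents = [(19, 42), (22, 38), (24, 45), (51, 12), (55, 9), (60, 15)]
centres, groups = kmeans(respondents, centres=[(20, 40), (55, 12)])
print(groups)  # one younger high-spending group, one older low-spending group
```

In a real study the attributes would be survey measures (e.g. attitudes or usage frequency), and the number of groups would be chosen to give segments that are both statistically distinct and commercially meaningful.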

Coding. Part of the data processing process, whereby responses to open-ended questions are grouped with comparable answers and the groups are then categorised using numerical codes.

Concept boards. Used in qualitative research (e.g. focus groups, hall tests or in-depth interviews), concept boards depict designs of products, packaging, adverts or brand names.

Confidence interval (or margin of error). The range within which the percentage (value) giving a specific answer would be expected to fall if the whole population were interviewed. Usually calculated at the 95% confidence level.
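For the statistically minded, the margin of error for a sample proportion can be sketched as below, assuming a simple random sample (the figures are illustrative only):

```python
# Margin of error for a sample proportion at a given confidence level.
import math

def margin_of_error(proportion, sample_size, z=1.96):
    """Half-width of the confidence interval for a sample proportion.
    z = 1.96 corresponds to the conventional 95% confidence level."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# If 50% of 1,000 respondents give a particular answer, the margin of
# error is about +/-3.1 percentage points: the true population figure
# is likely to lie between roughly 46.9% and 53.1%.
moe = margin_of_error(0.50, 1000)
print(round(moe * 100, 1))  # 3.1
```

Note that the margin shrinks as the sample grows, which is one reason sample size matters when comparing sub-groups.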

Conjoint analysis. A statistical technique used to demonstrate the relative importance of one rated characteristic over another. Often used in product development research to ascertain which product features are critical to purchase intention (versus others which could be dropped without significant consequence).

Continuous research (or longitudinal research). Research conducted in several phases (or waves) to establish trends and evaluate how opinions may be shifting over time. Continuous research typically involves asking the same questions to the same individuals each wave or asking the same questions to people who share similar characteristics (e.g. customer profile).

Correlation. The degree to which two attributes are related – that is, how far a change in one tends to be accompanied by a change in the other. Note that correlation does not in itself establish that one attribute causes the other.

Cross-break (see cross-tabulations). Analysis of data tabulations by two or more attributes. Data tabulations typically include cross-breaks so researchers can look at how people’s opinions differ depending on their profile (e.g. gender, age, product usage, etc.).

Cross-sectional research. The opposite of continuous (or longitudinal) research: a cross-sectional study collects data in a single phase, rather than over several periods of time.

Cross-tabulations (or contingency table). When one question is crossed with another question in the data tables, showing researchers how the answers to one question relate to the answers to another. For example, one question may record the respondent’s gender (male/female) and another may ask purchase intention of a product. This cross-tabulation would show the purchase intention by gender.
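The mechanics are simple to sketch in code. The records below are hypothetical, purely to show the shape of a cross-tabulation:

```python
# A minimal cross-tabulation: gender crossed with purchase intention.
from collections import Counter

# Hypothetical survey records: (gender, purchase intention)
responses = [
    ("Female", "Would buy"), ("Female", "Would buy"), ("Female", "Would not buy"),
    ("Male", "Would buy"), ("Male", "Would not buy"), ("Male", "Would not buy"),
]

table = Counter(responses)  # counts each (gender, intention) pair
for gender in ("Female", "Male"):
    row = {intent: table[(gender, intent)]
           for intent in ("Would buy", "Would not buy")}
    print(gender, row)
```

In practice, data tables of this kind are produced for every question in the survey, crossed against each of the agreed cross-breaks.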

CUT (Consumer Use Testing or a Consumer Usage Test). A type of research in which participants trial a product, evaluate it and provide feedback. CUT can be carried out in-home, at a central location (e.g. a hall test) or at another venue suitable for testing the product.


CUT, HUT and CLT – what do they mean and which should I use?

By Helen Lester, Sagitta Market Research Ltd.

In previous blogs we have looked at Product Testing Research – how it has become more important than ever before (see blog ‘The importance of product testing research in our e-society’) and how to ensure multi-country product test studies are a success (see blog ‘Best Practice Guide: International product testing – how to ensure your research is a success!’). This month, we look at Consumer Use Testing (CUT) specifically and review the possibilities.

Decoding the jargon
Firstly, we need to consider a few acronyms and their associated meanings. As mentioned, CUT refers to Consumer Use Testing – or a Consumer Usage Test. CUT can be carried out in-home, at a central location (e.g. a hall test) or at another venue suited to where the product may be used.

IHUT (often simply referred to as HUT) stands for In-Home Usage Testing (or Home Use Test), whereby the product is tested in the participant’s own home.

CLT – perhaps better known as Central Location Testing (or a Central Location Test) – is where consumers test a product in a central venue. This is often called a hall test. Participants are recruited (e.g. in the street) according to set criteria based on the characteristics of the target consumer profile (e.g. female, aged 18-25, who has a manicure at least once per month).

All the terms – CUT, IHUT, CLT – refer to, in essence, consumer use tests. In other words, the participant trials a product, evaluates it and provides feedback. The feedback may be carried out via a face-to-face pen and paper interview, online survey, telephone interview or self-completion questionnaire (including a diary).

Which option is best?
CLTs offer the opportunity for a fast turnaround of results – the fieldwork can be completed in a day, with results following the same week. Central Location Testing is also very cost-effective, as a large number of individuals can be interviewed in one day. If clients are looking to shortlist a number of products to pursue, or assess how a product might be accepted in the marketplace, CLTs are a useful methodology.

However, HUTs provide important benefits over CLTs, in that the consumer uses the product in its natural environment – namely, the real-life setting of their home. This arguably produces more faithful evaluations in respect of product assessments and product satisfaction.

In addition, as In-Home Usage Testing affords a longer test period (e.g. a week or even weeks) than Central Location Testing typically provides, participants have the time to really experience the product and evaluate it in depth, through the use of daily self-completion diaries and follow-up evaluation questionnaires administered by an interviewer. HUTs also provide more time for reflection – for example, regarding product development suggestions.

In-Home Usage Testing also gives the client an opportunity to measure a consumer’s first impression of a product and compare this to their subsequent experiences of it during the test period (which typically lasts at least a week). These measures can then be compared to the consumer’s overall impression at the end of the test period – culminating in an “accept” or “reject” conclusion (which can also be compared to a similar measurement taken pre-trial). HUTs are therefore an excellent way of testing a product before it is launched, as they provide in-depth insights and useful suggestions about ways in which the product may be improved.

So, both HUTs and CLTs have their own benefits and merits. It really comes down to the stage your product development is at, how quickly you need the results and what budget you have. Sagitta has extensive experience of both types of product testing. Call us to discuss your requirements and we can advise which option may suit you best.

The implications of a potential Brexit for the European market research industry

By Helen Lester, Sagitta Research

As a UK market research agency, we have many clients in Europe. On 23 June 2016, the nation will vote in a referendum on whether it should stay in or leave the European Union (EU). There is much debate about what a potential Brexit would mean for the British – and EU – economy. In addition, there are the various political and social ramifications of Britain ‘going it alone’. But what impact would a Brexit have on the European market research industry?

Firstly, let us consider how the EU came into existence. The bare bones of the EU were established after World War II (WWII) in order to create a strong economic and political partnership between the member countries. The rationale was that countries that trade together are more likely to avoid conflict. Six countries originally founded what was to become known as the EU – namely, Belgium, France, Germany, Italy, Luxembourg and the Netherlands; with Denmark, Ireland and the United Kingdom joining the common market (as it was then known) in 1973. A referendum held in 1975 resulted in a majority voting in favour of the UK remaining a member state. There are currently 28 European countries within the EU and it is now also a ‘single market’, whereby goods and people can move around without restriction. The EU has its own parliament, which sets laws in various areas. Nineteen of the member countries also use the EU’s own currency – the Euro – although of course the UK does not.

Although two of the most financially robust countries in the world – namely Switzerland and Norway – are thriving despite (or perhaps because of) not being part of the EU, no nation state has ever left the EU[1]. So what would it mean if the British people were to vote in favour of a Brexit? More specifically, what would be the implications for market research in Europe? The answer to this really depends on what you believe would be the benefits and/or drawbacks of Britain leaving the EU. For example, the ‘remain camp’ maintains that many large corporations, especially those in the manufacturing industry, would find it difficult to retain a presence in the UK due to the potential restrictions on the free movement of workers, in addition to the tax, legal and trade implications of a Brexit. Just this month, BMW hinted it might withdraw from the UK if we were to leave the EU, when it sent a letter to staff indicating the possibility of job losses in the event of a Brexit. If many large corporations did scale back – or completely remove – their operations from the UK, this could, in turn, affect the proportion of pan-European market research studies that include a fieldwork element in the UK, thereby potentially reducing the amount of fieldwork conducted by UK agencies. Moreover, if the exchange rate were to be adversely affected (as was the case when the referendum date was confirmed by David Cameron), fewer fieldwork surveys might be commissioned by UK companies in Europe due to the unfavourable costs involved.

However, the ‘out camp’ argues that in the event of a Brexit, it is unlikely the UK would make a full withdrawal from Europe in any case. Norway and Switzerland, for example, are members of other associations that safeguard their economic interests within Europe[2]. Under the Lisbon Treaty, the UK would have two years to negotiate a withdrawal treaty, which could allow, for example, the establishment of bilateral agreements with the EU to protect trade – by enabling the free movement of people for work purposes, easing customs procedures and duties, and so on. In addition, it is possible that trade would not be nearly as adversely affected as the ‘remain camp’ implies. In that case, would market research experience a negative impact at all? Without the ties of the EU, the UK would be free to negotiate its own trading terms with all countries, potentially increasing trade relationships with some nations. In this scenario, Britain could become a greater player with some countries, ultimately seeing a rise in the number of multinational research studies involving the UK and UK agencies.

No one can truly predict what will happen at the polls on 23 June 2016, nor can we know precisely how the economy will be affected if we leave the EU. However, market research agencies, such as Sagitta, will be busy interviewing the public in the lead-up to the referendum, as newspapers and media agencies seek information about the voting intentions of the British public.

If you would like to find out more about exit surveys, street interviewing, focus groups or other services we offer, please e-mail us at or call us on +44 (0)1303 262259.

1. Although not a nation state, Greenland (one of Denmark’s territories) left the EU in 1985.
2. Norway is in the European Economic Area (EEA), which allows it to remain within Europe’s single market. The EEA was established in 1992 as a ‘waiting room’ prior to joining the EU. Switzerland, although not part of the EU or EEA, is a member of the European Free Trade Association (EFTA), along with Norway, Liechtenstein and Iceland, and has agreed treaties that effectively mean Swiss nationals also have the right to live and work elsewhere in Europe.

Translating for multi-country research projects

By Amanda Johnston, Sagitta Research and D.Code Translations

In a previous blog post we wrote about some important factors for ensuring the success of your multi-country product testing project. This includes providing accurate translations of the recruitment screener, questionnaire and other materials such as showcards.

At Sagitta, we are often presented with questionnaires that have already been translated into English by the client. In this case, we always conduct a thorough proofreading to be certain that they will be understood by respondents in the UK; this is an integral part of the service we offer. Questions are sometimes worded in a way that seems unnatural or awkward to a native speaker, and since we do not want the meaning to be lost, clarity is essential. Another frequently encountered issue is that the response options for closed questions may not be clear or accurate with regard to the question asked; I will look at this in more detail below.

It may seem obvious to state that the questionnaire should be fully comprehensible for a respondent in a specific market, and this includes an element of ‘localisation’, which is best provided by a native speaker translator who is resident in the country in question.

Localisation encompasses some seemingly trivial elements that may nevertheless be slightly irritating for the recipient of the information if they are poorly adapted. Besides such components as currency conversion and formatting for pricing questions, for example, it is also advisable to alter date and time formats (for appointments, say, if placing products). Different markets may also have various approaches to the introduction, especially since on-street / in-venue recruitment proceeds via the ‘cold’ method of intercepting people who are going about their daily activities. Perhaps the introduction needs to be kept as brief as possible for this reason. Or maybe a slightly longer explanation is required, stating the name of the company conducting the fieldwork rather than the commissioning agency. A more or less detailed explanation of the reason for approaching them and the purpose of the research may be necessary: too long and the respondent’s interest may be lost; too brief and they may feel suspicious that it is a selling opportunity. The translator / research agency in that country is generally able to gauge the best solution with regard to these aspects, and should be permitted some freedom in doing so.

Looking now at the finer details of translating questionnaires and survey materials, besides producing an accurate translation of the original version of course, a good translator will also think as a respondent to ensure that the questionnaire is clear and easy to understand. For example, for a questionnaire about breakfast cereals, with detailed questions on texture, flavour, aftertaste, etc., the closed question responses need to be translated with some insight, to find terms that respondents in that country would naturally associate with this category of products.

Going back to the matter of proofreading a questionnaire in English submitted to us by a client, this may have been written in-house in that language, or it may have been translated from an original version. In the latter case, it is useful to compare it to that original. In any event, if something is not clear we always query it. For a project we were conducting on behalf of another agency in Europe, to test cosmetics, it was apparent when proofreading the questionnaire that several translated terms would not have had any meaning to a respondent in the UK in relation to the type of product being tested. After some discussion with the client, and even some exchanging of images to aid the description, we were able to find accurate wording to convey the meaning. To ignore such inaccuracies in a questionnaire not only leads to frustration for the respondent (and interviewer), but it could result in a failure to collect precise responses and data for those questions, and might affect any comparison with other markets. Eliminating such anomalies at this point is therefore of vital importance.

D.Code Translations specialises in translations for market research, and works closely with Sagitta to ensure the high quality of research materials deployed for all projects. We can also count several other market research agencies in Europe among our clients, and are highly experienced in a range of European language combinations in this sector.

For further information, please contact:

A day in the life of a market research intern – Preparing a project for fieldwork

By Theo, Sagitta Research

As I have discovered since I started working as a trainee at Sagitta, there is quite a lot involved in sending out all the materials for a product testing project. The project manager is responsible for the overall planning, but they need help to prepare what can be a physically demanding and time-consuming process. This is part of my job.

Example: we were recently testing products and some of them had to be re-packaged into blank packaging. This turned out to be many hours of repetitive work, but it was an important part of the instructions for the testing. Sometimes products have to be re-labelled with a code – numbers or letters – to identify them for the fieldwork. This is the case when the products to be tested have to be rotated or tested in certain combinations. It can take a lot of time to organise the products and label them. It’s not the most interesting part of the job, but it’s vital to follow the instructions carefully and keep track of what you’re doing. Fortunately, I am methodical by nature and never mind physical work as a break from sitting at the PC. And carrying the boxes of products to and from the courier drop-off and collection point requires good physical strength!

So what’s involved in sending out a project for the fieldwork? Here is the process in steps:

– The project manager decides with the client upon the regions / areas where the work is to be conducted, and they book it out via the area supervisors.
– I have to complete some of the forms sent to the supervisors / interviewers, such as booking forms with the job details, instructions and quota sheets. The project manager works out the quotas and explains them to me. The project manager checks the forms.
– The products have to be identified, labelled if necessary, sorted and packed for despatch to the field. Packaging is really important so that the products are secure when being transported. We often have to order suitable boxes in advance, especially for larger products.
– The project manager usually organises couriers, but I can do this in some cases. If the products are being posted, they are taken to the post office and we have to choose the right postage option for despatch.

Next time I will write about the ‘theory’ side of my apprenticeship, which is in Business Administration and Marketing. This will include some of what is involved – assignments and exams – and how it ties in with the day-to-day work I do.

A day in the life of a market research intern – Hall tests

By Theo, Sagitta Research

In this blog I will be writing about my experience as a trainee when working on a hall test and I will describe how a hall test works and its purpose.

A hall test (also known as a central location test) may sound like a straightforward way of collecting data; in fact, I found it to be quite a long and drawn-out process involving a great deal of preparation. The project manager has to brief the supervisor, who in turn has to explain everything carefully to the recruiters and interviewers. Their job is then to ensure the right people are recruited and that these respondents follow the questionnaire instructions carefully. Despite this, it is one of the most effective ways of researching the views of consumers. A typical hall test involves testing a product or concept on the general public. It usually takes place in a hired venue, which features a large hall and, if testing a food or beverage, a kitchen to prepare the product.

Various people are involved in a hall test. Apart from the respondent, there is:

1. The project manager, who liaises with the client and manages the project from head office. As a research executive, I also assist the project manager, helping with administrative duties such as printing questionnaires, booking venues etc. Often our project manager will attend the hall test to help ensure everything is managed according to the client’s requirements and to carry out a briefing where necessary.

2. The supervisor, who is primarily responsible for overseeing the project on the day (at the venue). At the hall test I recently attended, the supervisor was responsible for the initial briefing of interviewers, and also kept count of the number of interviews that were completed during the day and ensured quotas were filled. The supervisor also made sure that everything was conducted correctly and that the process ran smoothly.

3. The recruiters are vital to the success of a hall test, as they are solely responsible for approaching specific members of the public and inviting them in to take part in the hall test. A recruiter has to have the necessary skills to identify who may be eligible to take part, and must have a friendly disposition so that the respondent feels happy to participate.

4. The interviewers, whose role is to brief the respondents and guide them through a questionnaire.

5. The kitchen staff, who may have a more important role than some might assume. They must keep a cool head while preparing the product quickly and efficiently, according to the client’s specifications, for the respondent to consume. Hygiene is also very important, of course.

6. A quality controller, who checks that the questionnaires have been completed accurately. This facilitates the job of the data processor / analyst and, of course, ensures correct data.

Best Practice Guide: International product testing – ensuring your research is a success!

By Helen Lester, Sagitta Research

Previously we considered why Product Testing Research is more important than ever before (see blog ‘The importance of product testing research in our e-society’). This month we look at what steps you need to take to ensure multi-country studies are a success.

There are a number of key steps that should be taken when carrying out international product testing research; if you do not plan for them, they can quickly grow into hurdles that then have to be overcome. At the very least, this could mean your project overruns and/or costs more; at worst, the validity of your data could be compromised.

Market Profile
Once you have established your research objectives and decided which countries you want to survey, you have to consider your target market in each country. A product may have a different customer profile in the US from the one it has in some European countries; perhaps your customers tend to be older or younger in one country, for instance. When setting demographic quotas, it is therefore important to consider the market profile of each individual country you are researching, rather than simply setting quotas based on one country and assuming they are appropriate for all. For example, a brewer may sell beer to 16-year-olds in Germany and Belgium (where the minimum legal age for drinking beer is 16), whereas in the US the minimum legal age is 21.

Consequently, due attention needs to be given not only to who is eligible to take part in the survey in each country, but also to the individual quotas set for each country. It is likely you will want to make direct comparisons between specific cells (e.g. age groups), in which case it is imperative you interview sufficient numbers in each cell to ensure that statistical comparisons can be made. You may also need to structure the quota groups so that meaningful comparisons can be made between countries. In the case of beer drinkers, for example, you may require additional quotas (16-20 years and 21-25 years, rather than 16-25 years) to take account of Germany and Belgium, where the minimum drinking age is lower than in countries such as the US (where you might set a quota group of 21-25 years). This will then enable you to evaluate differences by age, as well as the potential effect of starting legal drinking at a slightly later life stage.

Fieldwork Period
The next step to consider is the timing of your research project. Ideally, fieldwork should take place at the same time in all countries; apart from consistency, this means the research can be processed and analysed in a timely manner. Even so, it is important to take into account country-specific biases that may occur. For example, staying with the beverage theme, Carlsberg notes in an annual report that beer consumption is affected not only by demographics, but also by seasonality, weather and a whole host of other factors. Given this, if carrying out a multinational beer study in Australia and the UK, you may wish to choose a fieldwork period that is not at the height of the summer/winter season in either country, in case it affects respondents’ perception of the product they are testing.

Translations
The questionnaire and other survey materials (e.g. interviewer instructions, showcards, etc.) must be properly finalised before sending them for translation. We recommend commissioning our own in-house translation agency – D.Code Translations – as they use native-speaking translators and also provide a proofing service within the cost of the translation (so, in effect, two linguists are involved in producing each translation). Irrespective of which translation agency you use, you need to allocate sufficient time in your study schedule for translations – both for producing them and for checking that each document is an accurate translation of the original. If it is not, you will not be able to make reliable cross-country comparisons.

Data Processing
When processing the data, it is imperative that the data are entered in the same way for each country. We would advise pre-allocating set data positions (column numbers) on the English questionnaire (including a code for ‘country’, of course!), so that all translations include the same information. The translations still need to be checked to ensure none of the data position labelling has been lost or altered during the translation stage, but this approach saves time and removes the chance of misaligned data later on.

Project Organisation
As you can see, project organisation is paramount when carrying out international product testing research. Apart from developing a detailed schedule, all fieldwork documents should be carefully scrutinised, as otherwise any mistakes will be multiplied across all the countries and languages involved – a costly error for a one-country study, let alone a multinational project.

We are experienced international researchers, so if you are planning some product testing research in multiple countries, give Sagitta MR a call. We have the understanding, professionalism and flexibility to ensure your project is a success!

A day in the life of a market research intern – Introduction

By Theo, Sagitta Research

I’m Theo and I am a Trainee Research Executive at Sagitta Market Research. I’ve been charged with the task of writing a regular blog to provide our readers with an insight into the day-to-day running of a market research agency.

Each month, I will focus on a specific aspect of managing a market research study. But first, a little about myself:

Having completed my three A-Levels within a year, I decided to embark upon an apprenticeship in Business Administration and Marketing. The apprenticeship is accredited by BPP, an independent training provider that manages and assesses apprenticeship schemes in the UK. I thought this would give me the opportunity to consider which career path I ultimately wanted to take, as well as providing a useful qualification. It has always been an ambition of mine to start working as soon as possible, and when this opportunity arose, it was one I couldn’t refuse. I wanted to be based in a company where the work would be both varied and challenging, and in a business where I would learn about the fundamentals of business operation as well as gain key office skills. I was fortunate to be recruited by local research agency Sagitta Market Research in May 2015.

I didn’t have any real understanding of what market research was until I worked for Sagitta Market Research. Over the last quarter, however, I have come to appreciate that it is an interesting and diverse industry. In an average week (not that there is any such thing as an average week in market research!), I could be despatching cosmetics for a product test, booking venues for a hall test, printing questionnaires for interviewers or doing data entry. Sagitta Market Research works both for other market research agencies and for end clients, and we cover a wide range of industries, including fragrance and cosmetics, farming, food and beverages, automotive, lifestyle and health – the list is endless. We do both quantitative and qualitative research, so I am learning about different aspects of market research every day. I can already see that in order to be a competent market researcher, you need to be organised, methodical, accurate, flexible and able to work well under pressure. Gaining these qualities will hopefully enhance my skills for the future.

In my next blog, I will go into more depth about the organisation and management of hall tests.

The importance of product testing research in our e-society

By Helen Lester, Sagitta Research

Product testing research has never been more important. In our 24/7 ‘e-society’, where social media postings can help make or break a product and brand, it is more critical than ever that companies test their product before taking it to market. Consumers have enormous choice now, both in terms of product access (with 24/7 online stores) and in the range of products available to them (due to increasing competition from Asia – especially China – and Eastern Europe). As a consequence, the modern shopper can be impatient, intolerant and deal savvy. Manufacturers therefore no longer have the luxury of refining their product ‘in the marketplace’ to ensure its success. Success is awarded to those who deliver a winning product from the outset – in other words, to those who get their product right first time. Whether it be an expensive washing machine or a low-cost lipstick, consumers don’t have the patience for second chances – they don’t want to waste their time or money.

Free ‘advocacy advertising’ (whereby a consumer recommends a product without being asked) is far more common with the prolific use of social media networks such as Facebook, Twitter and Tumblr. A quick tweet to say how amazing the latest Apple gadget is can send followers into a frenzy and provide more trustworthy publicity than any traditional advertising campaign could achieve. Conversely, a score of less than 7/10 on a review forum such as Reevoo can damage a product or brand on a scale not witnessed before the worldwide web.

Pre-internet, an unresearched product could cost a company money in a variety of ways – for example, by targeting the wrong demographic group. But companies often had the option of a hasty re-launch with a tweaked advertising campaign or a modified product. That option is far less viable now. In our e-society, the stakes are much higher. Failure to conduct comprehensive product testing research could lead to the failure of your entire brand, or a multi-million pound PR disaster – and at the click of a button the whole world will know about it before you have time to react.

Even the most experienced and slickest of brands are not immune – just look at Coca-Cola’s failed UK launch of its bottled water Dasani. One has to question how extensive its UK product testing research was, when it used the slogan “Bottled spunk” to describe the drink to the UK market. The slogan had delivered the company much success in the US, but just one focus group should have highlighted the alternative meaning in British English.

So if you have a product to take to market, spare yourself a potential launch failure and maximise your brand’s potential by carrying out product testing research first. Can you really afford not to?

Why did the election polls get it wrong…or did they?
By Helen Lester, Sagitta Research Consultant and former MORI Pollster

Following the UK’s general election on Thursday 7 May 2015, the media has been awash with stories about how the opinion polls ‘got it completely wrong’ – but did they, in actual fact?

The Ipsos MORI / GfK NOP exit poll for the BBC / ITV News / Sky News was far from ‘completely wrong’, correctly predicting both the Liberal Democrats’ collapse and the contrasting SNP landslide. The exit poll also showed that the Conservatives would win the most seats (albeit suggesting they would be 10 seats short of a majority, rather than winning outright as they did). In reality, the results from the polling station exit interviews were very close to what actually happened, especially once you take the margin of error into account. As the table below demonstrates, the predicted share of Conservative seats was out by only 2 percentage points and the Labour share by -1, with all other parties’ seat shares being accurate in percentage terms, albeit with slightly different individual seat numbers. Of course, when it comes to parliamentary seats, 2% can (and did) make the significant difference between an outright win and a hung parliament.

Party    | Actual Seats 2010-2015 | Exit Poll Seats | % of Total Seats (Exit Poll) | Actual Seats 2015-2020 | % of Total Seats (Actual) | % Difference
CON      | 307                    | 316             | 49%                          | 331                    | 51%                       | 2%
LAB      | 258                    | 239             | 37%                          | 232                    | 36%                       | -1%
SNP      | 6                      | 58              | 9%                           | 56                     | 9%                        | 0%
LIB DEM  | 57                     | 10              | 2%                           | 8                      | 1%                        | 0%
P.CYMRU  | 3                      | 4               | 1%                           | 3                      | 0%                        | 0%
UKIP     | 0                      | 2               | 0%                           | 1                      | 0%                        | 0%
GREEN    | 1                      | 2               | 0%                           | 1                      | 0%                        | 0%
OTHER    | 18                     | 19              | 3%                           | 18                     | 3%                        | 0%
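The percentage columns above follow directly from the raw seat counts (there are 650 seats in the House of Commons). A minimal sketch of the arithmetic, using the four largest parties’ figures from the table:

```python
# Recompute seat shares and the exit-poll-vs-actual gap from raw counts.
TOTAL_SEATS = 650  # House of Commons, 2015

seats = {
    # party: (exit poll seats, actual 2015 seats)
    "CON": (316, 331),
    "LAB": (239, 232),
    "SNP": (58, 56),
    "LIB DEM": (10, 8),
}

for party, (predicted, actual) in seats.items():
    pred_pct = 100 * predicted / TOTAL_SEATS
    act_pct = 100 * actual / TOTAL_SEATS
    print(f"{party:8s} exit poll {pred_pct:4.1f}%  actual {act_pct:4.1f}%  "
          f"gap {act_pct - pred_pct:+4.1f} pts")
```

Rounded to whole percentages, this reproduces the table: the Conservatives at 49% predicted versus 51% actual, Labour at 37% versus 36%, and the smaller parties essentially unchanged.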

I accept that the pre-election opinion polls did not forecast the final result so accurately (saying it was too close to call between Labour and the Conservatives), but did they reflect the opinions of the nation at that point in time? In other words, were they really incorrect? After all, a market research survey is not a crystal ball predicting future actions; it is a snapshot of people’s opinions at the time they were interviewed. Yes, researchers can use modelling techniques to try to extrapolate the likely result. However, modelling ultimately works with the information provided by respondents and relies on a great deal of interpretation. What interests me more is whether the respondents’ answers at the heart of the survey – the core research – were true. Moreover, the repeated ‘quick and dirty’ polls often commissioned by newspapers ahead of the election are only after attention-grabbing headline results. The media are not necessarily willing to pay (or wait) for sophisticated modelling analysis – in fact, do journalists even want that? Is it not perhaps more exciting to say “it’s too close to call”?

The British Polling Council is undertaking an independent inquiry into why the final pre-election polls, published on the eve of the election, were inaccurate. The main polling companies have already offered their own varied explanations for what the media would call a ‘debacle’. Sampling error is one possibility, of course, as is a late swing to the right (voters deciding on the day to vote Tory). Indeed, Ipsos MORI’s final poll for the Evening Standard on 5-6 May reported that one in five respondents (21%) might change their mind about which party to vote for on polling day. However, whilst both of these could be true, I think there is another major factor at play – shame, or the ‘closet voters’ effect, as Nigel Farage put it in reference to his own party’s supporters. In other words, I think that pre-election, many people were too embarrassed, or even ashamed, to say they would be voting Tory, even if they had already decided they would. If this is true, why? Why would a Conservative voter be less likely to openly support their party than a Labour supporter?

As widely reported, Labour supporters were much more vocal on Twitter than their right-wing counterparts. Although Twitter users are not representative of the voting population, it is interesting to consider the effect this may have had as more and more tweets were shared during the campaign and ultimately on polling day. Among my own Facebook group of friends and family (which is clearly a small sample and not necessarily representative of the British public), I have noticed that the left-wingers have been very vocal in sharing their political views leading up to the election, on polling day and afterwards. By contrast, the right-wingers have appeared more reserved and perhaps even ‘ashamed’ to voice their political stance.
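On the sampling error point: for a simple random sample, the 95% margin of error on a reported vote share p from n respondents is approximately 1.96 × √(p(1−p)/n). A quick illustrative sketch – the sample size of 1,000 and the 34% share are assumptions chosen for illustration, not figures from any actual poll:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error (as a proportion) on a share p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative only: a poll of 1,000 respondents reporting a 34% share
# carries a margin of error of roughly +/- 3 percentage points.
moe = margin_of_error(0.34, 1000)
print(f"+/- {100 * moe:.1f} percentage points")
```

A gap of two or three points between two parties is therefore well within the noise of a typical single poll – one reason “too close to call” headlines came so easily.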
If this is the case, and Conservative voters were simply too embarrassed to admit in the pre-election opinion polls that they were going to vote Tory, what changed on the day of the election to make the exit polls more accurate? I think something changes when someone leaves a polling station – call it ‘polling pride’ perhaps. In other words, when you ask someone coming out of a polling station who they have just voted for, there is a temporary euphoria and positivity surrounding their vote; they are proud to have taken part in shaping our country’s future. They have conviction about where they put their “X” and are happy to divulge it. These closet Conservatives believed a blue government would be the more secure option for our country. Fast forward 24 hours, however, and as they returned to their Facebook and Twitter accounts, television and online news reports, work, the pub and so on, they, like me, may have witnessed an onslaught of anti-Conservatism and retreated back into their closets!