Author Archive

MRX and technology – on the road and cautiously edging forward

The belief is that technology moves fast. Our annual survey of technology in the market research industry – done in partnership with FocusVision – is released on June 8th and offers an alternative view. As in previous years, it shows an industry on the move – but rather than traveling light with a laptop and a few essentials and heading for the high-speed train, MRX is found taking to the road in a mixture of cars, vans and trailers, as there is so much luggage everyone needs to take with them.

The resourceful and the lucky get away early to speed along an almost empty highway; for the rest, progress is of the crowded highway type, as some lanes inexplicably slow to near stationary, while others keep edging forward. If only you knew in advance which one to pick!

Our survey has been tracking the progress of market research in adapting to new technologies to collect, process and present data for 13 years now. Considerable change can be seen – the line-up of data collection methods in 2016 would be unrecognisable to someone in 2004. But most areas we look at in depth – whether it is mobile research for collecting data, or the long-assumed decline of CATI, or the rise of data visualization and dashboards – move slower than you might expect if you listen to the buzz in the industry.

The same is true of the players. We tend to find that the large firms are further ahead with new techniques and innovation – their size and greater means let them risk the odd detour. But we often detect pockets of innovation among the smaller firms too. This year, for example, we found that large and small firms are neck-and-neck in the extent to which they are incorporating analytics based around Big Data into their reporting. We also uncovered another success story for the industry around how it has taken to storytelling across the board.

Ours is rightly a cautious industry. Much of that baggage is there because it is needed. But we hope our annual survey also lets research companies examine how they are doing in relation to their technology, and check their own direction of travel. We also hope it might stimulate some frank discussion about just what gets taken on the journey, and what needs to be put out for recycling instead.

Conference Report: Challenges of Automation, ASC, May 2017

Tim Macer reports on ASC’s May 2017 one-day conference in London: “Satisfaction Guaranteed? The Challenges of Automation in Survey Research”

Colin Strong, Ipsos MORI, in the conference keynote, said that techniques and technology developed in the 20th Century have brought about a certain view as to how humans behave. These also form the assumptions behind artificial intelligence. Here, he noted an interesting paradox: “Underneath these AI apps – behind the scenes you have a whole bunch of humans. Humans pretending to be machines, with machines pretending to be humans. What is going on?” he asked.

We still need people

Strong’s answer is that as AI techniques become more sophisticated and advanced, the need for humans to keep them meaningful and to identify what makes sense intensifies. He notes that in parallel with our tendency to project human attributes onto technology, we have also been trying to project machine qualities onto humans – behavioural models being one obvious example. He predicts that human skills will be very much in demand as AI advances. He tips Process Optimiser as the go-to profession of the next decade, after Data Scientist.

Established jobs will go, Strong predicts, but “different jobs, not fewer jobs” will emerge, as they have before, with each major technological revolution of the past. He has no doubt that computers will pass the Turing Test, and in some ways already do. Yet that, he sees, is also because we are becoming a “bit more machine like” – and it is that, and not the mass unemployment that some fear, which he predicts will pose more fundamental political and social challenges in the future.

Rolling the R in Research

A glimpse of the changing skills and jobs in research emerged from the two speakers that followed. Ian Roberts from Nebu championed R, the open source statistical package that’s a favourite among academic researchers, as a practical way to automate research processes. Nebu had used it to create a self-regulating system that worked out how to optimise despatching survey invitations.

Roberts considers R especially suitable because of the access it provides to a comprehensive library of routines to perform the kind of machine-learning modelling that can monitor the performance of survey invitations to different subpopulations, or by different distribution channels such as email and SMS – and then deliver subsequent invitations in ways that will achieve the best completion rates. In the case Roberts described, as the system was able to learn, the need for human supervision was reduced from one person’s constant involvement, to tactical monitoring for a few hours per week.
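The self-regulating approach Roberts describes can be sketched as a simple multi-armed bandit. The following is an illustrative Python sketch only – Nebu’s actual system is built in R, and the class and channel names here are hypothetical – showing how a system can mostly exploit the best-performing invitation channel while still exploring the others:

```python
import random

class ChannelOptimiser:
    """Epsilon-greedy bandit: learns which invitation channel
    (e.g. email vs SMS) yields the best completion rate."""

    def __init__(self, channels, epsilon=0.1):
        self.epsilon = epsilon                      # share of exploratory sends
        self.sent = {c: 0 for c in channels}
        self.completed = {c: 0 for c in channels}

    def rate(self, channel):
        # Completion rate observed so far (0.0 if nothing sent yet).
        sent = self.sent[channel]
        return self.completed[channel] / sent if sent else 0.0

    def pick_channel(self):
        # Mostly exploit the best-performing channel, but keep exploring.
        if random.random() < self.epsilon:
            return random.choice(list(self.sent))
        return max(self.sent, key=self.rate)

    def record(self, channel, completed):
        # Feed each invitation outcome back so the system keeps learning.
        self.sent[channel] += 1
        if completed:
            self.completed[channel] += 1
```

With each outcome recorded, the need for human supervision shrinks to occasionally checking that the learned rates still look sensible – much as Roberts described.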

“Without learning [within your system] you will never get to the next stage of how do you improve what you are doing”, he said.

Watching what those pesky processes get up to

John McConnell from Knowledge Navigators presented a paper by Dale Chant of Red Centre Software, in which R also made an appearance, alongside Red Centre’s Ruby platform, as a vehicle for automating many of the interrelated processes involved in running a large-scale tracking study, from sampling through to extracting and reporting the data.

Chant categorised automation tasks into three levels, from ‘micro’ for a single process, through ‘midi’ in which several micro-automations are consolidated into one, to ‘macro’ where there are typically many decision points that straddle a number of different processes. The risk in automation is in creating black boxes, said McConnell, where ‘a broken process can run amok and do real damage’.

The key to success, McConnell and Chant advocate, is exposing decision points within the system to human supervision. Echoing Colin Strong’s earlier warning about not seeking to eliminate people altogether but instead applying them to sense-making, Chant reported that whenever he had been tempted to skimp on the manual checking of the decision points he builds in, he has always come to regret it. According to McConnell, the risks are low with micro-automation, as that is what most tools currently do very successfully. But when moving up the scale of integration, and bringing disparate processes together, the risks magnify. “Here, you have to think of quality assurance – and crucially about the people side, the staff and the skills”, said McConnell.
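The principle of exposing a decision point rather than burying it can be shown in a few lines. This is a minimal, hypothetical Python sketch (not Red Centre’s implementation; the function names and thresholds are invented) of a macro automation that pauses for human sign-off instead of running as a black box:

```python
def checked(value, description, approve):
    """Decision point: approve() is the human supervision hook.
    The pipeline halts rather than proceeding unchecked."""
    if not approve(f"{description}: {value!r} - OK to proceed?"):
        raise RuntimeError(f"Halted at decision point: {description}")
    return value

def run_wave(target_completes, expected_rate, approve):
    # Micro step: compute how many invitations to send this wave,
    # but surface the figure for sign-off before acting on it.
    invites = checked(round(target_completes / expected_rate),
                      "invitation volume", approve)
    return invites
```

In practice `approve` might post to a review queue rather than block; the point, as McConnell and Chant argue, is that the decision remains visible to a person instead of disappearing inside the automation.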

Lose the linear: get adaptive

Two papers looked at automating decisions within the survey about which questions to serve to participants. The first, jointly presented by Jérôme Sopoçko from the research software provider Askia and Chris Davison from research consultancy KPMG Nunwood, mused over the benefits of building adaptive surveys. Sopoçko asserts these are made more feasible now thanks to the APIs (software-to-software interfaces) in most of today’s data collection platforms, which allow them to reach out to open source routines that can perform, for example, text translation or sentiment analysis in real time, and then determine where the interview goes next.

Davison welcomed the opportunity to break out of linear surveys by starting with open, unstructured questions and then applying text analytics in real time to interpret the result and select from a pool of predefined questions “to ask the next most appropriate question for [the participant].” He continued: “It starts to challenge that traditional paradigm. It can also help with trust. We cannot know how long that survey will take for that individual – if you put the most relevant questions first you can actually stop at 10 minutes. This has to be better than simply putting more traditional surveys into a new environment.”
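The routing Davison describes – interpret an open answer, then pick the most relevant follow-up from a predefined pool – can be sketched very simply. The following Python sketch uses a crude keyword lexicon as a stand-in for the real-time text analytics he refers to; the word lists and question pool are invented for illustration:

```python
# Tiny stand-in lexicons; a production system would call a proper
# sentiment/text-analytics service here instead.
NEGATIVE = {"slow", "rude", "broken", "waited"}
POSITIVE = {"great", "helpful", "fast", "friendly"}

# Pool of predefined follow-up questions, keyed by detected tone.
QUESTION_POOL = {
    "negative": "What went wrong, and how could we put it right?",
    "positive": "What did we do that you would like to see more of?",
    "neutral":  "Is there anything else you would like to tell us?",
}

def classify(answer: str) -> str:
    # Count lexicon hits in the open-ended answer.
    words = {w.strip(".,!?") for w in answer.lower().split()}
    neg = len(words & NEGATIVE)
    pos = len(words & POSITIVE)
    if neg > pos:
        return "negative"
    if pos > neg:
        return "positive"
    return "neutral"

def next_question(answer: str) -> str:
    # Serve the most relevant predefined question for this participant.
    return QUESTION_POOL[classify(answer)]
```

Because the most relevant questions come first, the interview can be cut off at a fixed time – ten minutes, in Davison’s example – without losing the questions that mattered most for that participant.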

…and get chatting too – without being derogatory

Simon Neve from software provider FusionSoft described how he has put a similar model into practice in customer satisfaction surveys, based on sentiment analysis performed on the fly on verbatim answers about service experience. This then allows the software to probe selectively, so that initial responses that would otherwise be ambiguous or impossible to interpret are clarified and made intelligible. The aim is to provide a survey which appears as a conversation with a chatbot – not a human, though. Neve said: “Our learning is that the more context you can provide, the better experience you can provide to the respondent.”

However, automated interpretation has its limitations. Software or survey designers need to be vigilant for irony, and to be especially careful when repeating back anything if the response turns into something of a rant. “You have to be careful about playing back derogatory comments,” Neve cautioned. “We have a list of 2,500 derogatory words to avoid.” But he also quipped: “If you put an emoji up with your probe, you are immediately forgiven for anything you say.”
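A screening step of the kind Neve describes might look like the following Python sketch. It is purely illustrative – FusionSoft’s actual list runs to 2,500 words, and the handful shown here, along with the length heuristic for rants, are invented:

```python
# Hypothetical stand-in for a full derogatory-word blocklist.
DEROGATORY = {"useless", "idiots", "pathetic"}

def safe_to_play_back(verbatim: str, max_words: int = 30) -> bool:
    """Decide whether a verbatim can safely be echoed back in a probe."""
    words = verbatim.lower().split()
    if len(words) > max_words:          # long rants are risky to repeat
        return False
    # Block playback if any word, stripped of punctuation, is on the list.
    return not any(w.strip(".,!?") in DEROGATORY for w in words)
```

Responses that fail the check would get a generic probe instead of having their own words played back to them.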

2013 Technology Survey Results Released

The results of our annual survey are released. It’s a special year this year, as it marks the tenth anniversary of the survey. We celebrated by asking a few special questions, inviting people to cast their minds back over the last ten years, and then to sweep forward another ten.

We also explored these areas of topical interest:

• Smartphone and Tablet devices in research,
• Text analysis and coding,
• Voice of the Customer (VoC) and Customer Experience Management (CEM).

Plus the report contains the customary range of tracking data, in some cases going back nine or ten years, on key metrics relating to research technology and its use across the industry.


For 2013, the survey interviewed 240 market research companies in 35 different countries, selecting individuals who are responsible for, influential over, or aware of technology decisions within their company. The sample is drawn to ensure representation of three global regions: North America, Europe and Asia Pacific, balanced to represent the relative amount of research carried out in these regions, according to figures published by ESOMAR.

We are extremely grateful to all those companies and individuals who took the trouble and time to contribute to the 2013 survey.

You can also find all the nine previous annual reports in our library of reports.

Tell us what you think!

Leave us a comment to tell us about the findings that most interested or most surprised you. (You need to be logged in to make a comment).