Conference Report: Challenges of Automation, ASC, May 2017

Tim Macer reports on ASC’s May 2017 one-day conference in London: “Satisfaction Guaranteed? The Challenges of Automation in Survey Research”

Colin Strong, Ipsos MORI, said in the conference keynote that the techniques and technology developed in the 20th century have brought about a certain view of how humans behave, and that these same assumptions underpin artificial intelligence. Here he noted an interesting paradox: “Underneath these AI apps – behind the scenes you have a whole bunch of humans. Humans pretending to be machines, with machines pretending to be humans. What is going on?” he asked.

We still need people

Strong’s answer is that as AI techniques become more sophisticated and advanced, the need for humans to keep the results meaningful and to identify what makes sense only intensifies. He noted that in parallel with our tendency to project human attributes onto technology, we have also been trying to project machine qualities onto humans – behavioural models being one obvious example. He predicts that human skills will be very much in demand as AI advances, and tips Process Optimiser as the go-to profession of the next decade, after Data Scientist.

Established jobs will go, Strong predicts, but “different jobs, not fewer jobs” will emerge, as they have with each major technological revolution of the past. He has no doubt that computers will pass the Turing Test, and in some ways already do. Yet that, he suggests, is also because we are becoming a “bit more machine-like” – and it is this, rather than the mass unemployment some fear, that he predicts will pose the more fundamental political and social challenges of the future.

Rolling the R in Research

A glimpse of the changing skills and jobs in research emerged from the two speakers who followed. Ian Roberts from Nebu championed R, the open source statistical package that is a favourite among academic researchers, as a practical way to automate research processes. Nebu had used it to create a self-regulating system that worked out how to optimise the despatch of survey invitations.

Roberts considers R especially suitable because of the access it provides to a comprehensive library of routines for the kind of machine-learning modelling that can monitor the performance of survey invitations to different subpopulations, or across different distribution channels such as email and SMS, and then deliver subsequent invitations in ways that achieve the best completion rates. In the case Roberts described, as the system learned, the need for human supervision fell from one person’s constant involvement to tactical monitoring for a few hours a week.
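Nebu’s implementation was in R and Roberts did not show its internals, so the following is only a rough sketch, in Python, of the kind of self-regulating loop he described: an epsilon-greedy learner that favours the invitation channel with the best observed completion rate for each subpopulation while continuing to explore occasionally. The channel names, subpopulation label and exploration rate are all invented for illustration.

import random
from collections import defaultdict

# Completion statistics per (subpopulation, channel): [completed, sent]
stats = defaultdict(lambda: [0, 0])

EPSILON = 0.1  # fraction of invitations reserved for exploration

def choose_channel(subpopulation, channels=("email", "sms")):
    """Pick the channel with the best completion rate seen so far,
    but try a random channel a small fraction of the time."""
    if random.random() < EPSILON:
        return random.choice(channels)
    def rate(channel):
        completed, sent = stats[(subpopulation, channel)]
        return completed / sent if sent else 0.0
    return max(channels, key=rate)

def record_outcome(subpopulation, channel, completed):
    """Feed back whether the invitation led to a completed interview."""
    stats[(subpopulation, channel)][1] += 1
    if completed:
        stats[(subpopulation, channel)][0] += 1

# One despatch cycle: choose, send, then log the result
channel = choose_channel("18-24")
record_outcome("18-24", channel, completed=True)

In a live system the statistics would accumulate over thousands of invitations and persist between despatch runs; this is simply the shape of the feedback loop, not Nebu’s actual code.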

“Without learning [within your system] you will never get to the next stage of how do you improve what you are doing”, he said.

Watching what those pesky processes get up to

John McConnell from Knowledge Navigators presented a paper by Dale Chant of Red Centre Software, in which R also made an appearance alongside Red Centre’s Ruby platform, as a vehicle for automating many of the interrelated processes involved in running a large-scale tracking study, from sampling through to extracting and reporting the data.

Chant categorised automation tasks into three levels: from ‘micro’ for a single process, through ‘midi’, in which several micro-automations are consolidated into one, to ‘macro’, where there are typically many decision points straddling a number of different processes. The risk in automation is in creating black boxes, said McConnell, where ‘a broken process can run amok and do real damage’.

The key to success, McConnell and Chant advocate, is exposing decision points within the system to human supervision. Echoing Colin Strong’s earlier warning not to seek to eliminate people altogether but instead to apply them to sense-making, Chant reported that whenever he had been tempted to skimp on manually checking the decision points he builds in, he has come to regret it. According to McConnell, the risks are low with micro-automation, as that is what most tools currently do very successfully. But when moving up the scale of integration and bringing disparate processes together, the risks magnify. “Here, you have to think of quality assurance – and crucially about the people side, the staff and the skills,” said McConnell.
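Neither speaker showed code, but the principle of exposing a decision point translates directly: between automated steps, check a simple health metric and halt for a human rather than letting a broken process run on unattended. The sketch below, in Python, is purely illustrative; the step name, figures and tolerance are invented.

def check_decision_point(name, observed, expected, tolerance=0.10):
    """Stop the pipeline for manual review if the observed value
    drifts too far from what was expected."""
    drift = abs(observed - expected) / expected
    if drift > tolerance:
        raise RuntimeError(
            f"Decision point '{name}': drift of {drift:.0%} exceeds "
            f"{tolerance:.0%}; pausing for human review."
        )

# e.g. after sample selection, before the next stage is triggered
check_decision_point("weekly sample size", observed=1480, expected=1500)

A macro-level automation would have many such checkpoints; the point is that each one surfaces a decision to a person instead of burying it inside the black box.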

Lose the linear: get adaptive

Two papers looked at automating, within the survey itself, decisions about which questions to serve to participants. The first, jointly presented by Jérôme Sopoçko from the research software provider Askia and Chris Davison from research consultancy KPMG Nunwood, mused over the benefits of building adaptive surveys. Sopoçko asserted that these are made more feasible now by the APIs (software-to-software interfaces) in most of today’s data collection platforms, which let them reach out to open source routines that can perform, for example, text translation or sentiment analysis in real time, and then determine where the interview goes next.

Davison welcomed the opportunity to break out of linear surveys by starting with open, unstructured questions and then applying text analytics in real time to interpret the result and select from a pool of predefined questions “to ask the next most appropriate question for [the participant].” He continued: “It starts to challenge that traditional paradigm. It can also help with trust. We cannot know how long that survey will take for that individual – if you put the most relevant questions first you can actually stop at 10 minutes. This has to be better than simply putting more traditional surveys into a new environment.”
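Neither presenter published the routing logic, but it is easy to picture. In the hypothetical Python sketch below, a crude keyword scorer stands in for the real-time text analytics a platform would actually call through its API, and the pool of follow-up questions is invented.

# Toy router: score an open-ended answer, then pick the next question
# from a predefined pool. A real adaptive survey would call an external
# text-analytics service rather than this keyword-based stand-in.

NEGATIVE = {"slow", "rude", "broken", "disappointed", "waiting"}
POSITIVE = {"great", "helpful", "quick", "friendly", "easy"}

QUESTION_POOL = {
    "probe_negative": "You mentioned a problem. What should we have done differently?",
    "probe_positive": "What in particular worked well for you?",
    "probe_neutral": "Is there anything else about your experience you would like to add?",
}

def score_sentiment(text):
    """Crude sentiment score: positive keyword hits minus negative ones."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def next_question(open_response):
    """Choose the most relevant follow-up for this participant."""
    score = score_sentiment(open_response)
    if score < 0:
        return QUESTION_POOL["probe_negative"]
    if score > 0:
        return QUESTION_POOL["probe_positive"]
    return QUESTION_POOL["probe_neutral"]

print(next_question("Friendly staff but the delivery was slow"))

Because the pool is predefined, the adaptivity lies in the selection and ordering of questions, which is what lets the survey stop once the most relevant ground has been covered.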

…and get chatting too – without being derogatory

Simon Neve from software provider FusionSoft described how he has put a similar model into practice in customer satisfaction surveys, based on sentiment analysis performed on the fly on verbatim answers to questions about service experience. This allows the software to probe selectively, so that initial responses that would otherwise be ambiguous or impossible to interpret are clarified and made intelligible. The aim is to provide a survey that feels like a conversation with a chatbot – not a human, though. Neve said: “Our learning is the more context you can provide, the better experience you can provide to the respondent.”

However, automated interpretation has its limitations. Software or survey designers need to be vigilant for irony, and to be especially careful when repeating anything back if the response turns into something of a rant. “You have to be careful about playing back derogatory comments,” Neve cautioned. “We have a list of 2,500 derogatory words to avoid.” But he also quipped: “If you also put an emoji up with your probe, you are immediately forgiven for anything you say.”
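Neve did not show the mechanism, but the safeguard he described is simple to sketch. In the hypothetical Python snippet below, the respondent’s own words are only quoted back in a probe if none of them appear on a blocklist; the three words here stand in for FusionSoft’s much longer list.

DEROGATORY = {"useless", "idiots", "rubbish"}

def build_probe(verbatim):
    """Return a follow-up probe, quoting the respondent only when safe."""
    words = set(verbatim.lower().split())
    if words & DEROGATORY:
        # Never echo the rant back; ask a neutral follow-up instead.
        return "Thanks for being so candid. What one change would most improve things?"
    return f'You said "{verbatim}". Could you tell us a little more about that?'

print(build_probe("Delivery took far too long"))

A real system would also need to handle punctuation, misspellings and the irony Neve warned about, none of which a simple word match catches.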

Tim Macer

Tim is a world-renowned specialist in the application of technology in the field of market and opinion research, and probably the most widely published writer in the field. His roots are in data analysis, programming, training and technical writing. These days, as principal at meaning, he works with researchers, users of research data and technology providers around the globe as an independent advisor. He is passionate about improving the research process and empowering people through better use of technology.
