True, but it’s hard to do meaningful analysis on free-text fields. This is a tradeoff in every survey: respondents like free-text because it lets them express themselves completely, but the results are often useless unless you are willing to put massive effort into examining the responses by hand.
(I’m not sure what they’re planning here, but I would probably take these keywords and turn them into a tag cloud, which gives at least a comprehensible result without too much effort.)
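(To make the tag-cloud idea concrete, here’s a minimal sketch of the keyword-to-cloud step, assuming the keywords come in as free-text strings; the sample responses below are made up for illustration:)

```python
from collections import Counter

# Hypothetical free-text keyword responses (made-up sample data).
responses = [
    "pricing, documentation",
    "docs; pricing",
    "support",
    "pricing",
]

# Normalize and split each response into individual keywords.
keywords = []
for r in responses:
    for token in r.replace(";", ",").split(","):
        token = token.strip().lower()
        if token:
            keywords.append(token)

# Tag-cloud weights are just frequency counts.
cloud = Counter(keywords)
print(cloud.most_common())
```

(Feeding those counts into any word-cloud renderer then gives you the “comprehensible result without too much effort” I mean.)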
This is true, but it can be worth it. In our internal surveys at work, it takes about an hour to go through 100-ish free-form responses and tag them, with the advantage that you can always go back from the “tag cloud” and drill into individual responses to get a better flavor of what people’s complaints are about. We end up with 100-ish free-form responses for each of 4-ish questions, and getting 4 people around a table for an hour of tagging isn’t too much effort.
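(The drill-down part is easy to keep if you store the hand-assigned tags alongside the raw responses; a sketch, with made-up example data:)

```python
from collections import defaultdict

# Hypothetical hand-tagged responses: (response text, tags assigned in the review session).
tagged = [
    ("The billing page is confusing", ["billing", "ux"]),
    ("I can't find the export button", ["ux"]),
    ("Invoices arrive late", ["billing"]),
]

# Build an index so you can go from a tag in the "cloud" back to the raw responses.
by_tag = defaultdict(list)
for text, tags in tagged:
    for tag in tags:
        by_tag[tag].append(text)

# Cloud weights plus drill-down, most frequent tag first.
for tag, texts in sorted(by_tag.items(), key=lambda kv: -len(kv[1])):
    print(f"{tag} ({len(texts)}):")
    for t in texts:
        print(f"  - {t}")
```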
If we had thousands of responses, it might take a few days to a few weeks of processing. Whether that’s OK depends on your use case: if this is the core data that drives your team’s yearly work-prioritization process, 2 weeks to process user feedback is nothing. OTOH, not everyone can afford to spend days or weeks manually churning through survey results!
I asked a friend of mine who used to be CTO of an online-surveys company whether there are good solutions for handling “open questions” in surveys – potentially something NLP-oriented – but alas, he said he isn’t familiar with any.
Maybe it’s worth publishing the raw data so that people can analyze it themselves and publish their insights here.