I talked with @Jkatz (WMF) the other day, and wanted to make sure that an idea from that conversation got captured here. A big part of the problem with the way the Article Feedback Tool collected data was that there was no good way to prioritize it or to decide whether the feedback was in fact actionable or usable. A system could instead offer canned feedback options that validate the need for that information across multiple user interactions, and could be focused on pushing content into the backlogs for some of the inline templates (https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Inline_Templates) or even the section templates (https://en.wikipedia.org/wiki/Wikipedia:Template_messages/Section). There are good precedents for these kinds of workflows coming out of Zooniverse and a couple of other projects, where queries/concerns/questions from the crowd get double- or triple-checked, and are only labeled "real" once they reach a certain level of confidence from the crowd.
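To make that concrete, here is a minimal sketch of what such a Zooniverse-style validation step could look like. Everything here is an assumption for illustration: the threshold value, the function names, and the mapping from canned feedback to specific maintenance templates are all hypothetical, not a proposed implementation.

```python
from collections import defaultdict

# Hypothetical sketch: a piece of canned reader feedback (e.g. "this
# section needs citations") is only promoted to an actionable backlog
# item once enough independent readers agree. The threshold and the
# feedback-to-template mapping below are illustrative assumptions.

AGREEMENT_THRESHOLD = 3  # assumed: three independent confirmations

FEEDBACK_TO_TEMPLATE = {  # assumed mapping to section maintenance templates
    "needs_citations": "{{Unreferenced section}}",
    "outdated": "{{Update section}}",
    "confusing": "{{Confusing section}}",
}

# (article, section, feedback_kind) -> set of distinct user ids
votes = defaultdict(set)

def record_feedback(article, section, kind, user_id):
    """Record one reader's canned feedback; return a template suggestion
    once crowd agreement crosses the confidence threshold."""
    key = (article, section, kind)
    votes[key].add(user_id)  # a set de-duplicates repeat votes from one user
    if len(votes[key]) >= AGREEMENT_THRESHOLD:
        return FEEDBACK_TO_TEMPLATE.get(kind)
    return None

# Example: the third distinct reader flags the same section, so the item
# is treated as "real" and a section template lands in the backlog.
suggestion = None
for user in ("u1", "u2", "u3"):
    suggestion = record_feedback("Some_article", "History", "needs_citations", user)
print(suggestion)  # -> {{Unreferenced section}}
```

The point of the threshold is exactly the prioritization problem above: nothing reaches an editor's queue until the crowd has already done the first pass of triage.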
@Pginer-WMF Also, it might be worth taking a look at the #AskWikipedia conversation started by Liam at https://www.facebook.com/groups/511418892316698/. There is a lot of room for developing some kind of socially integrated engagement that would make the editors feel more human, without creating whole new queues (you would have to scale the deployment to the number of people already working in that space).