In many tech conferences, attendees are invited to rate the talk and/or the speaker from 1 to 5 stars. This type of rating is interesting but has a few drawbacks.
The discussion started as a Twitter thread with this proposal in French.
- As a speaker, if you get a 1/5, you don’t always know why (and what to improve).
- As a speaker, if you get a 5/5, you may think the audience liked the talk for the demos while they actually liked the diagram explanations.
- As an attendee, you’re looking for talks to watch on YouTube. How do you choose between 10 talks that all have a 4.5/5 rating?
- As a conference organizer, how do you know why a speaker/talk was liked or not?
- …
A different approach would be to provide (as an app/website) a bingo grid with ready-to-use feedback items (sketched as data after the list below). Examples in English would be:
- I learned something
- Too fast
- Very interesting
- FUN!
- I loved the demos
- Hard to understand
- A bit boring
- I understood absolutely nothing
- Not deep enough
- Not enough demos/examples
- Too complicated
- Best talk ever
- …
And the list goes on.
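To make the idea concrete, here is a minimal sketch of how such a grid could be described as plain data, assuming a TypeScript app/website; the type names, the `sentiment` hint, and the example values are illustrative assumptions, not part of the original proposal.

```typescript
// Hypothetical data model: just one way a conference could describe
// its own bingo grid as plain data. Names and values are assumptions.
type FeedbackCell = {
  label: string;                                   // text shown on the bingo square
  sentiment: "positive" | "negative" | "neutral";  // optional hint for later analysis
};

type BingoGrid = {
  conference: string;
  talkId: string;
  size: 3 | 4;             // 3x3 or 4x4
  cells: FeedbackCell[];   // size * size entries
};

const exampleGrid: BingoGrid = {
  conference: "ExampleConf",
  talkId: "talk-42",
  size: 3,
  cells: [
    { label: "I learned something", sentiment: "positive" },
    { label: "Too fast", sentiment: "negative" },
    { label: "Very interesting", sentiment: "positive" },
    { label: "FUN!", sentiment: "positive" },
    { label: "I loved the demos", sentiment: "positive" },
    { label: "Hard to understand", sentiment: "negative" },
    { label: "A bit boring", sentiment: "negative" },
    { label: "Not deep enough", sentiment: "negative" },
    { label: "Best talk ever", sentiment: "positive" },
  ],
};
```

Keeping the grid as plain data would let each conference swap the labels and the language without changing the app itself.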
I think each conference could come up with its own grid (varying the language, obviously, but also adapting it to the types of talks).
- Would a 3x3 or a 4x4 grid be enough?
- Could we allow the speaker to provide a complementary row of feedback items, like:
  - I liked the first part about X
  - The demo about Y was not very production-ready
  - The slide deck was ugly
- How can we create some kind of hall of fame (most interesting talk, best demos…)?
- Should we try to derive a rating from the grid? (a rough sketch of these last two ideas follows)
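For those last two questions, here is a minimal, purely illustrative sketch of how ticked cells could be turned into a score and a per-label hall of fame, in the same assumed TypeScript setting as above; the `BingoResponse` type, the label weights, and the function names are assumptions, not a proposed standard.

```typescript
// Hypothetical aggregation: one naive way to derive a score and a
// "hall of fame" from ticked bingo cells. Weights and labels are
// illustrative assumptions only.
type BingoResponse = {
  talkId: string;
  tickedLabels: string[]; // labels the attendee checked on the grid
};

// Assumed signed weights per label (could also come from the grid definition).
const weights: Record<string, number> = {
  "I learned something": 1,
  "Very interesting": 1,
  "Best talk ever": 2,
  "Too fast": -1,
  "A bit boring": -1,
  "I understood absolutely nothing": -2,
};

// Average signed score per talk: positive ticks raise it, negative ticks lower it.
function scoreTalks(responses: BingoResponse[]): Map<string, number> {
  const totals = new Map<string, { sum: number; count: number }>();
  for (const r of responses) {
    const entry = totals.get(r.talkId) ?? { sum: 0, count: 0 };
    entry.sum += r.tickedLabels.reduce((acc, l) => acc + (weights[l] ?? 0), 0);
    entry.count += 1;
    totals.set(r.talkId, entry);
  }
  const averages = new Map<string, number>();
  for (const [talkId, { sum, count }] of totals) {
    averages.set(talkId, sum / count);
  }
  return averages;
}

// Talks ranked by how often one label was ticked,
// e.g. "I loved the demos" -> candidates for "best demos".
function hallOfFame(responses: BingoResponse[], label: string): string[] {
  const counts = new Map<string, number>();
  for (const r of responses) {
    if (r.tickedLabels.includes(label)) {
      counts.set(r.talkId, (counts.get(r.talkId) ?? 0) + 1);
    }
  }
  return Array.from(counts.entries())
    .sort((a, b) => b[1] - a[1])
    .map(([talkId]) => talkId);
}
```

Whether such a derived number is worth publishing at all is exactly the open question above.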
What do you think?
As a speaker, it looks like a great and powerful tool.
We must keep it simple, though: already too few attendees bother to give even a single rating, so filling in one or more 4x4 bingos is asking a lot (conference organizers should gamify the rating in order to increase participation).