As speakers talk, we save the transcribed text of their speech and associate it with the slide currently on screen. In addition, we have sensor data that we can associate with the slide (see "this is intense badge"). This way we can generate rich metadata about each slide (or image URLs).
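A minimal sketch of what a per-slide record could look like; the names here (SlideRecord, add_transcript, add_sensor_reading) are assumptions for illustration, not an actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical per-slide record built up while the talk is running.
# All class and method names are assumptions, not the real implementation.

@dataclass
class SlideRecord:
    slide_index: int
    image_url: str = ""
    transcript: list = field(default_factory=list)       # transcribed utterances
    sensor_readings: list = field(default_factory=list)  # e.g. badge sensor samples

    def add_transcript(self, text: str) -> None:
        self.transcript.append(text)

    def add_sensor_reading(self, reading: dict) -> None:
        self.sensor_readings.append(reading)

# As the speaker advances slides, incoming data is attached to the current slide.
slides = {3: SlideRecord(slide_index=3, image_url="https://example.com/slide3.png")}
slides[3].add_transcript("So here we see the quarterly numbers...")
slides[3].add_sensor_reading({"heart_rate": 92, "noise_db": 41.5})
```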
This data can be used to reinforce certain slides for future presentations (ML), or to give users feedback on how well they did compared to other presenters. We could also potentially sell the data to external parties (this requires a legal review first).
Future plans involve rating slide sentiment based on background laughter from the audience during the time the slide was shown, as well as collecting general feedback from users to rate the overall user experience.
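One hedged sketch of how laughter-based sentiment could be scored: measure how much detected laughter overlaps a slide's on-screen time window. The function name and the use of (start, end) timestamp pairs are assumptions, not a committed design.

```python
# Hypothetical: score a slide by the fraction of its on-screen duration
# that overlaps detected audience laughter. Timestamps are in seconds.

def laughter_score(slide_start: float, slide_end: float,
                   laughter_events: list[tuple[float, float]]) -> float:
    """Fraction of the slide's duration covered by laughter, clamped to 1.0."""
    duration = slide_end - slide_start
    if duration <= 0:
        return 0.0
    covered = 0.0
    for start, end in laughter_events:
        # Overlap between the laughter event and the slide's time window.
        overlap = min(end, slide_end) - max(start, slide_start)
        if overlap > 0:
            covered += overlap
    return min(covered / duration, 1.0)

# Slide shown from t=10s to t=20s; laughter detected at 12-14s and 19-22s.
score = laughter_score(10.0, 20.0, [(12.0, 14.0), (19.0, 22.0)])
# covered = 2s + 1s over a 10s slide → score = 0.3
```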