High-quality elaborative peer feedback benefits both learners and teachers. However, learners can experience difficulties in giving high-quality feedback on complex skills when using textual analytic rubrics. High-quality elaborative feedback can be strengthened by adding video-modeling examples with embedded self-explanation prompts, turning textual analytic rubrics (TR) into so-called 'video-enhanced analytic rubrics' (VER). This study contrasts two experimental conditions (TR, n = 54; VER, n = 49), each using its own version of an anonymized online tool that collected the feedback given as 'Tips' (suggestions for improvement) and 'Tops' (identified strengths). Peer feedback quality (concreteness and consistency) was evaluated using Natural Language Processing. As expected, the video-enhanced rubric condition resulted in a higher number of words used and less naive wording than the textual rubric condition. Contrary to our assumptions, it neither lowered the amount of non-constructive wording nor improved the amount of behavioral and process-related feedback. Possibly, the shift from providing more feedback to providing more accurate behavioral and process-related feedback could not yet occur within the time frame set for the study.
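To make the kind of NLP-based measures mentioned above concrete, the sketch below scores a feedback comment on word quantity and naive wording. This is purely illustrative, not the study's actual pipeline: the `NAIVE_TERMS` lexicon and the `score_feedback` function are hypothetical stand-ins for whatever lexicons and models were actually used.

```python
# Illustrative sketch only: a simple word-count and lexicon-based
# measure of feedback quality. NAIVE_TERMS is a made-up example
# lexicon, not the one used in the study.

NAIVE_TERMS = {"good", "nice", "fine", "okay", "great"}

def score_feedback(text: str) -> dict:
    # Crude whitespace tokenization with punctuation stripped;
    # a real pipeline would use a proper NLP tokenizer.
    tokens = [t.strip(".,!?;:").lower() for t in text.split()]
    naive = [t for t in tokens if t in NAIVE_TERMS]
    return {
        "word_count": len(tokens),
        "naive_count": len(naive),
        "naive_ratio": len(naive) / len(tokens) if tokens else 0.0,
    }

print(score_feedback("Nice work, the structure of your argument is clear."))
```

A longer comment with fewer generic terms would score higher on word count and lower on naive-wording ratio, mirroring the two outcome measures on which the VER condition outperformed the TR condition.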