Modern neuroscience has shown that the brain is profoundly rhythmic and that the frequencies of neural rhythms are responsive to the frequencies of musical rhythms. We collected electroencephalography (EEG) responses to 12 naturalistic music stimuli (songs) from 20 participants. We retrieved the tempo and its sub-harmonics from the stimuli and used this information to predict the beats in the brain response using machine learning techniques. We observed a hierarchy of beats in each song, with one beat frequency dominating (i.e., higher in magnitude than) the others. This led us to form three groups of songs and their brain responses, each group defined by the beat frequency that dominated the beat hierarchy of its songs. We used short segments of 1, 3, and 5 seconds of brain response rather than the entire song duration. We created two classification sets for the three groups of brain responses and compared two spatial filtering techniques, the mean across electrodes (ME) and the first principal component (PC1), against a Dense method that uses data from all electrodes. This was followed by feature extraction using band power. We developed univariate and multivariate classification models to assess the contribution of each frequency band representing a beat frequency. The Dense method outperformed ME and PC1. Features related to the eighth note produced the greatest discrimination between classes. We also observed a positive correlation between window length and the rate of correct prediction: accuracy improved significantly from the 1-second to the 5-second window in both sets. We achieved maximum accuracies of 70% for binary and 56% for ternary classification, both about 20% above chance level. Random Forest and kNN performed better than SVM. This work contributes to the growing body of knowledge on the neural mechanisms of rhythm processing in the brain.
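The pipeline described above (spatial filtering, band-power feature extraction, classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling rate, the band edges standing in for beat-related frequencies, and the synthetic two-class data are all assumptions made here for the sake of a runnable example; only the overall structure (ME spatial filter, Welch band power, Random Forest classifier) follows the text.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
FS = 250  # assumed EEG sampling rate in Hz (not stated in the abstract)

def me_filter(epoch):
    """Mean-across-electrodes (ME) spatial filter: average over channels."""
    return epoch.mean(axis=0)

def band_power(signal, fs, lo, hi):
    """Average power spectral density in [lo, hi] Hz via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Hypothetical beat-related bands in Hz (e.g. quarter-note / eighth-note
# rates for some tempo); the paper's actual bands are derived per song.
BANDS = [(1.0, 2.0), (2.0, 4.0), (4.0, 8.0)]

def features(epoch, fs=FS):
    """Band-power feature vector from an ME-filtered epoch."""
    sig = me_filter(epoch)
    return np.array([band_power(sig, fs, lo, hi) for lo, hi in BANDS])

def make_epoch(beat_hz, n_ch=32, secs=3):
    """Synthetic 3 s, 32-channel 'epoch' with a dominant beat frequency."""
    t = np.arange(secs * FS) / FS
    sine = np.sin(2 * np.pi * beat_hz * t)
    return sine + 0.5 * rng.standard_normal((n_ch, t.size))

# Two synthetic classes with different dominant beat frequencies.
X = np.array([features(make_epoch(hz)) for hz in [1.5] * 40 + [3.0] * 40])
y = np.array([0] * 40 + [1] * 40)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

Swapping `me_filter` for the first principal component of the channel covariance, or passing per-channel band powers directly (the Dense method), changes only the feature stage; the classifier stage is unchanged.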