The music signal comprises different features such as rhythm, timbre, melody, and harmony. Its impact on the human brain has been an active research topic for the past several decades. Electroencephalography (EEG) enables non-invasive measurement of brain activity. Leveraging recent advances in deep learning, we propose a novel approach for song identification from EEG responses using a Convolutional Neural Network (CNN). We recorded EEG signals from a group of 20 participants while they listened to a set of 12 song clips, each approximately 2 minutes long, presented in random order. The repetitive nature of music is captured by a data-slicing approach that treats brain signals of 1-second duration as representative of each song clip. More specifically, we predict the song corresponding to one second of EEG data given as input, rather than to a complete two-minute response. We also discuss pre-processing steps to handle the high dimensionality of the dataset and various CNN architectures. For all experiments, each participant's EEG response to each song appears in both the training and test data. We obtained 84.96% accuracy at a 0.3 train-test split ratio. Moreover, our model gave commendable results compared to chance-level probability when trained on only 10% of the total dataset. The observed performance supports the notion that listening to a song creates specific patterns in the brain, and that these patterns vary from person to person.
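The 1-second data-slicing step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the sampling rate (128 Hz) and channel count (32) are assumptions chosen only for the example.

```python
import numpy as np

def slice_eeg(recording, fs=128, win_sec=1):
    """Split a (channels, samples) EEG recording into non-overlapping
    fixed-length windows, each of which can then serve as one training
    example labeled with the song being played."""
    win = fs * win_sec
    n_windows = recording.shape[1] // win  # drop any trailing partial window
    return np.stack(
        [recording[:, i * win:(i + 1) * win] for i in range(n_windows)]
    )

# A 2-minute, 32-channel recording yields 120 one-second slices,
# each of shape (32, 128): one example per second of listening.
clip = np.zeros((32, 128 * 120))
print(slice_eeg(clip).shape)  # (120, 32, 128)
```

Under this scheme, a single 2-minute clip contributes on the order of a hundred training examples per participant rather than one, which is what makes training a CNN on only 20 participants feasible.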