Human languages are natural forms of big data. Statistical language models are key components of many human language technology applications, including speech recognition, machine translation, natural language processing, human-computer interaction, language learning and handwriting recognition. A central challenge in language modelling research is to appropriately model long-distance context dependencies. In recent years, deep learning based language modelling techniques have become increasingly popular due to their strong generalization performance and inherent power in modelling sequence data. However, the application of deep learning techniques to speech and language processing has also opened up a number of key research challenges: the computational cost incurred in training and evaluation significantly limits their scalability and the range of possible application areas. To address these issues, this project aims to significantly improve the efficiency and performance of recurrent neural network based deep language modelling approaches on large data sets.
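To make the recurrent approach concrete, the following is a minimal sketch of one forward step of an Elman-style recurrent neural network language model, where the hidden state carries context across arbitrarily long token histories. The toy vocabulary, dimensions and fixed weights below are hypothetical stand-ins for trained parameters, not part of the project itself.

```python
import math

VOCAB = ["a", "b", "c"]   # hypothetical toy vocabulary
V, H = len(VOCAB), 4      # vocabulary size, hidden size

def fill(rows, cols, seed):
    # Deterministic small values standing in for trained weights.
    return [[math.sin(seed + i * cols + j) * 0.5 for j in range(cols)]
            for i in range(rows)]

Wxh, Whh, Why = fill(H, V, 1), fill(H, H, 2), fill(V, H, 3)

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def softmax(z):
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def step(h, token):
    # One recurrent step: the hidden state h summarizes all previous
    # tokens, which is how the model captures long-distance context.
    x = [1.0 if VOCAB[i] == token else 0.0 for i in range(V)]  # one-hot input
    pre = [a + b for a, b in zip(matvec(Wxh, x), matvec(Whh, h))]
    h_new = [math.tanh(p) for p in pre]
    probs = softmax(matvec(Why, h_new))  # distribution over the next token
    return h_new, probs

# Run the recurrence over a short token sequence.
h = [0.0] * H
for tok in "abca":
    h, probs = step(h, tok)
```

At each position the model emits a probability distribution over the next token; training (omitted here) would adjust the weights to maximize the likelihood of observed text. The per-step matrix-vector products are exactly the computations whose cost motivates the efficiency goals stated above.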