Towards Achieving Machine Comprehension Using Deep Learning on Non-GPU Machines

Authors

  • U. Khan, College of Computing and Information Sciences, PAF Karachi Institute of Economics and Technology, Pakistan
  • K. Khan, College of Computing and Information Sciences, PAF Karachi Institute of Economics and Technology, Pakistan
  • F. Hassan, Department of Computer and Information Sciences, Universiti Teknologi Petronas, Malaysia
  • A. Siddiqui, Department of Computer Sciences, Sir Syed University of Engineering and Technology, Pakistan
  • M. Afaq, College of Computing and Information Sciences, PAF Karachi Institute of Economics and Technology, Pakistan
Volume: 9 | Issue: 4 | Pages: 4423-4427 | August 2019 | https://doi.org/10.48084/etasr.2734

Abstract

Efforts to enable machines to understand human language have a long history. Today, such activities fall under the broad umbrella of machine comprehension. Recent advances in machine learning have made the results encouraging. Deep learning promises even better results, but it typically requires expensive, resource-hungry hardware. In this paper, we demonstrate the use of deep learning for machine comprehension on non-GPU machines. Our results suggest that good algorithmic insight and a detailed understanding of the dataset can yield meaningful results through deep learning even on non-GPU machines.
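The article page does not include code; the following is a minimal, hypothetical sketch of what CPU-only deep learning training can look like in practice, assuming PyTorch and a toy bidirectional-LSTM reader over synthetic question-answering data. The model name, layer sizes, and data below are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' code): force PyTorch onto the CPU and train a tiny
# BiLSTM "reader" that scores each context token as a possible answer start.
import torch
import torch.nn as nn

device = torch.device("cpu")     # explicitly avoid any GPU
torch.set_num_threads(4)         # bound CPU usage on a modest machine

class TinyReader(nn.Module):
    """Encodes a token-id context with a BiLSTM and predicts an answer-start index."""
    def __init__(self, vocab_size=5000, emb_dim=50, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.start = nn.Linear(2 * hidden, 1)   # one start-score per token

    def forward(self, context_ids):
        h, _ = self.lstm(self.emb(context_ids))
        return self.start(h).squeeze(-1)        # (batch, seq_len) start logits

model = TinyReader().to(device)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy batch: 8 "contexts" of 40 token ids, each with a gold answer-start position.
contexts = torch.randint(0, 5000, (8, 40), device=device)
starts = torch.randint(0, 40, (8,), device=device)

for step in range(100):          # deliberately small scale, feasible on CPU
    optim.zero_grad()
    loss = loss_fn(model(contexts), starts)
    loss.backward()
    optim.step()
```

The design choice that makes this feasible without a GPU is scale: small embeddings, a single recurrent layer, short sequences, and small batches keep each training step cheap enough for commodity CPUs.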

Keywords:

natural language processing, machine comprehension, deep learning, non-GPU machines, SQuAD




How to Cite

[1] U. Khan, K. Khan, F. Hassan, A. Siddiqui, and M. Afaq, “Towards Achieving Machine Comprehension Using Deep Learning on Non-GPU Machines”, Eng. Technol. Appl. Sci. Res., vol. 9, no. 4, pp. 4423–4427, Aug. 2019.

Metrics

Abstract Views: 565
PDF Downloads: 389

