
Virtuality Keyboard by AI

O.M. Katyarmal, Vishakha Dhande, Pranjali Mundane, Kashish Pandit, S.S. Sagane


Computer vision is widely used to process images in machine-learning systems. In this work we use OpenCV together with the cvzone library in Python. OpenCV is well known; cvzone is essentially a library for hand detection, pose detection, face detection, and related tasks. Using these tools, we built a keyboard named the "Virtuality Keyboard". It is another example of the current computing trend of making things smaller and faster: computing is no longer limited to desktops and laptops, but has also made its way into mobile devices such as palmtops and cell phones. The input device, however, the good old QWERTY keyboard, has barely changed in the last 50 years or so. The virtual keyboard is the most recent advancement in this area. Virtual keyboard technology combines computer vision and artificial intelligence to let users work in the air, accessing keyboard keys simply by moving their fingers. OpenCV is the most popular library for computer vision: a cross-platform, open-source library for machine learning, image processing, and related tasks, used to develop real-time computer vision applications. cvzone is a computer vision package that uses the OpenCV and MediaPipe libraries at its core, making it simple to build applications such as hand tracking, face detection, facial landmark detection, and pose estimation, as well as image processing and other computer vision tasks. We then installed the required modules: the cvzone HandDetector (hand tracking) module, and the Controller class imported from pynput.keyboard to make the virtual keyboard type. We initialised a HandDetector with a detection confidence of 0.8 and assigned it to a detector object. Then, based on the layout of our keyboard, we created an array of lists for the keys and defined an empty string to store the typed keys.
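The core of the approach described above can be sketched in plain Python: a key layout stored as an array of lists and a hit test that maps a fingertip coordinate to the key under it. In the full application, cvzone's HandDetector (initialised with a detection confidence of 0.8) would supply the fingertip position from each webcam frame, and pynput's keyboard Controller would press the matched key; the key sizes, origin, and simulated fingertip positions below are illustrative assumptions, not values from the paper.

```python
# Key layout as an array of lists, as described in the abstract.
KEYS = [["Q", "W", "E", "R", "T", "Y", "U", "I", "O", "P"],
        ["A", "S", "D", "F", "G", "H", "J", "K", "L", ";"],
        ["Z", "X", "C", "V", "B", "N", "M", ",", ".", "/"]]

KEY_SIZE = 85   # pixel width/height of each drawn key (assumed value)
ORIGIN_X = 50   # top-left corner of the keyboard on the frame (assumed)
ORIGIN_Y = 50

def key_at(x, y):
    """Return the key label under pixel (x, y), or None if no key is hit."""
    for row_idx, row in enumerate(KEYS):
        for col_idx, key in enumerate(row):
            kx = ORIGIN_X + col_idx * KEY_SIZE
            ky = ORIGIN_Y + row_idx * KEY_SIZE
            if kx <= x < kx + KEY_SIZE and ky <= y < ky + KEY_SIZE:
                return key
    return None

# Empty string that accumulates the typed keys, as in the paper.
typed = ""
# Simulated index-fingertip positions; in the real app these come from
# cvzone's HandDetector (e.g. hands[0]["lmList"][8] for the index tip).
for tip in [(60, 60), (150, 150), (60, 230)]:
    hit = key_at(*tip)
    if hit is not None:
        typed += hit  # the real app would also call Controller().press(hit)
```

In the complete system, this hit test runs once per frame, and a pinch or dwell gesture typically confirms the press so that merely hovering over a key does not type it.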


OpenCV (computer vision), MediaPipe, cvzone, pynput, keyboard, hand tracking module








Copyright (c) 2022 Journal of Computer Technology & Applications