Assessing the Impact of AI and Virtual Reality on Strengthening Cybersecurity Resilience Through Data Techniques
Praveen Kumar Maroju
Vol 10, Special Issue 2024
Pages: 1–9
Abstract:
Technological advances in AI offer society enormous opportunities, but they also create a growing need to address their new implications. For this reason, emphasis is frequently placed on ethical and secure design to prevent unintended failures. Cybersecurity- and AI-safety-oriented approaches, by contrast, also account for deliberate malice, such as unethical and adversarial AI design. Recently, a similar focus on malevolent actors has emerged in virtual reality (VR) security and safety. Thus, even though the nexus of AI and VR (AIVR) offers many beneficial opportunities for cross-fertilization, its possible socio-psycho-technological ramifications make it imperative to anticipate malevolent AIVR design from the outset. As a simplified example, this study examines the potential use of Generative AI (deepfake techniques) for deception in immersive journalism. From an immersive co-creation perspective, we argue that defenses against such future AIVR safety hazards related to deception in immersive contexts should be conceived transdisciplinarily. We first derive a cybersecurity-oriented process to generate defenses through immersive design fictions.