The motivation for this project is to determine the public's response to the rise of AI, identify the concerns that lead people to oppose it, and propose mitigations that could make the public more comfortable with the technology.
Much of this public distrust stems from how AI works: because AI systems collect data to find trends and make predictions, they raise trust issues that can lead to rejection of the technology (Ramuthi, 2023).
This project aims to identify the concerns people have about the rise of AI and its impact on their lives, such as unemployment due to automation (United Nations, n.d.) or ethical issues such as the use of private information for predictions (Zhang et al., 2021).
This research will help me identify the complaints the public has regarding the rise and implementation of AI. I will collect data through survey forms and analyse it to find trends in these trust issues, then suggest ways to mitigate them and support the safe adoption of AI technology.
First, I will carry out a literature review of research papers and journals to examine the concerns people have about modern technology, then narrow the focus to the trust people have in AI. For my primary research, I will collect data through a website where participants complete surveys and the results are recorded as they arrive. For visualization, the graphs will be updated in real time.
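A minimal sketch of how the survey back end described above might record responses and expose up-to-date counts for the real-time graphs. The field names (`concern`, `trust_level`) and the in-memory store are illustrative assumptions, not the final survey design:

```python
from collections import Counter

# Hypothetical in-memory store for survey responses; a real deployment
# would persist these in a database behind the website's API.
responses = []

def record_response(concern, trust_level):
    """Store one survey submission and return updated concern counts,
    which the front end could poll to refresh its graphs in real time."""
    responses.append({"concern": concern, "trust_level": trust_level})
    return Counter(r["concern"] for r in responses)

counts = record_response("unemployment", 2)
counts = record_response("privacy", 1)
counts = record_response("privacy", 3)
```

Each submission returns the refreshed tallies, so the visualization layer can redraw its charts immediately after every response.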
Once I have collected enough data, I will use different analysis methods and tools to find trends and patterns and draw conclusions from them. This project will also demonstrate responsible data handling: the data is collected and used only for the purposes for which it was gathered, and it can be deleted once the research is complete.
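The analysis step above could be sketched as follows; this is an assumed example, with invented sample data and a hypothetical 1-5 trust scale, showing one simple way to summarise trends (mean trust level per concern) and then delete the raw data afterwards:

```python
from statistics import mean

# Illustrative sample of collected survey responses (field names assumed).
data = [
    {"concern": "unemployment", "trust_level": 2},
    {"concern": "privacy", "trust_level": 1},
    {"concern": "privacy", "trust_level": 3},
]

def trust_by_concern(rows):
    """Group responses by concern and compute the mean trust level."""
    groups = {}
    for row in rows:
        groups.setdefault(row["concern"], []).append(row["trust_level"])
    return {concern: mean(levels) for concern, levels in groups.items()}

summary = trust_by_concern(data)

# After the research is completed, the raw responses can be deleted,
# keeping only the aggregated summary.
data.clear()
```

Keeping only the aggregate summary while clearing the raw responses mirrors the proposal's point that personal data is used solely for the stated purpose and removed when the research ends.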
With this project, I will be able to communicate the issues the public has regarding AI and what law enforcement can do to help the public adapt to these new changes and regain trust in AI safety.