Project Details
Description
Deepfakes are synthetically generated media that pose as authentic video recordings. A quintessential example is a video in which a person's face is replaced with that of someone who does not appear in the original footage. While deepfakes can be used positively, the majority are malicious, created for purposes such as pornography and the dissemination of misinformation. Consequently, deepfake identification has become an emerging area of research, with most work focused on machine learning techniques that detect audio-visual imperfections. While such research is valuable, a critical gap remains: the human perspective on deepfake identification is missing. Despite advances in research, identification algorithms do not yet perform at a level where human judgment is unnecessary. Further, deepfakes are relatively new to many people, who may fall prey to such misinformation, unaware that videos can be so convincingly doctored.

Set in this context, the proposed research is underpinned by three I's, namely identification, impact, and instruction. Its specific objectives are to: (1) establish baseline deepfake identification strategies undertaken by people; (2) ascertain the factors that affect the effectiveness of these baseline strategies; (3) examine the impact of deepfakes on people; and (4) create an instructional program to teach people about deepfakes. These objectives are accomplished through three streams of work, each buttressed by theoretical frameworks drawn from fields such as information science, psychology, game design, and instructional design.

Stream 1 addresses the first objective and encompasses two studies: one in which participants identify real and deepfake videos in a naturalistic setting and elucidate their strategies in an interview, and another that compares their performance against existing deepfake identification algorithms. The work will employ Hilligoss and Rieh's (2008) framework for information credibility assessment (FICA), which presents a unified conceptualization of three levels of credibility judgments but whose potential has not yet been harnessed in the deepfake context. Complementing this framework is the Elaboration Likelihood Model (ELM), which will be used to understand how people process videos when assessing their authenticity.

Stream 2 addresses the second and third objectives. One study employs a large-scale experiment to determine the factors that influence deepfake identification strategies, as well as people's resulting trust in online media and self-efficacy. Like Stream 1, it will be underpinned by both the FICA and the ELM. Recognizing that the impact on deepfake victims has not been systematically investigated, the other study employs a survey and interviews to understand the potential harms such victims face. It will employ Carlson and Dalenberg's (2000) framework for the impact of traumatic experiences, which comprehensively characterizes the responses of people experiencing traumatic events.

Finally, Stream 3 creates a gamified deepfake instructional program covering areas such as an introduction to the technology, its impact on people, and strategies for identifying deepfakes. The work will be user-centered, involving potential users throughout its three phases: instructional and application design, implementation, and summative evaluation. To ensure that the gamified program meets both instructional and enjoyability goals, the phases will be guided by the Attention, Relevance, Confidence, and Satisfaction (ARCS) Model, which is used in instructional design, and the GameFlow Model, which describes the elements of game enjoyment.

The proposed project addresses an increasingly prevalent problem, the sharing of misinformation online, and is timely in its focus on deepfakes. Compared with other media types, the richness of video content may make videos appear more believable. Moreover, the novelty of the medium may enhance the credibility of the message, since many people are unaware that videos can be so convincingly falsified. We therefore contend that there is an urgent need to understand how well people can identify deepfakes and what makes this task challenging. Further, by understanding how deepfakes influence people's attitudes, as well as the harms they bring to their victims, our findings can help stakeholders craft better policies and interventions to mitigate their impact. As part of this latter endeavor, the proposed project addresses a practical need for deepfake instruction. The deliverables will thus serve as an important starting point for tackling this form of misinformation.
Status: Active
Effective start/end date: 2/1/23 → 1/31/26
Funding
- National Research Foundation Singapore
ASJC Scopus Subject Areas
- Computer Science (all)
- Economics, Econometrics and Finance (all)
- Development
- Geography, Planning and Development
- Social Sciences (miscellaneous)
- Engineering (all)