Gateway to Think Tanks
Source type | Report
Normalized type | Report
DOI | https://doi.org/10.7249/RRA519-1
Source ID | RR-A519-1
Title | Human-machine detection of online-based malign information
Authors | William Marcellino; Kate Cox; Katerina Galai; Linda Slapakova; Amber Jaycocks; Ruth Harris
Publication date | 2020-06-23
Publication year | 2020
Pages | 69
Language | English
Conclusions | Social media is increasingly being used by human and automated users to distort information, erode trust in democracy and incite extremism.
Our research produced a machine learning model that can successfully detect Russian trolls.
To trial the model's portability, a promising next step would be to test the model in a new context, such as the online Brexit debate.
Abstract | As social media is increasingly being used as people's primary source for news online, there is a rising threat from the spread of malign and false information. With an absence of human editors in news feeds and a growth of artificial online activity, it has become easier for various actors to manipulate the news that people consume. Finding an effective way to detect malign information online is an important part of addressing this issue. RAND Europe was commissioned by the UK Ministry of Defence's (MOD) Defence and Security Accelerator (DASA) to develop a method for detecting the malign use of information online. The study was contracted as part of DASA's efforts to help the UK MOD develop its behavioural analytics capability. Our study found that online communities are increasingly being exposed to junk news, cyber bullying activity, terrorist propaganda, and political reputation boosting or smearing campaigns. These activities are undertaken by synthetic accounts and human users, including online trolls, political leaders, far-left or far-right individuals, national adversaries and extremist groups. In support of government efforts to detect and counter these activities, the research team successfully developed and applied a machine learning model to a Russian troll database to identify differences between authentic political supporters and Russian trolls shaping online debates regarding the 2016 US presidential election. To trial the model's portability, a promising next step could be to test the model in a new context such as the online Brexit debate.
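The report describes, at a high level, a machine learning model that separates Russian troll accounts from authentic political supporters. It does not publish code or features, so purely as an illustration, the sketch below shows a generic text-classification baseline (TF-IDF n-grams plus logistic regression via scikit-learn) of the kind often used for this task. The example texts, labels and model choice are assumptions for demonstration only, not the authors' method.

```python
# Illustrative sketch only: a generic text-classification baseline for
# separating troll-account posts from authentic supporters' posts.
# The toy data and model choice here are assumptions, NOT the pipeline
# described in the RAND report.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: 1 = known troll account, 0 = authentic supporter.
texts = [
    "Vote now before THEY silence us!!!",
    "Attended the rally today, good turnout and civil discussion.",
    "The media is LYING to you, share this everywhere",
    "Here is the candidate's full policy statement on trade.",
]
labels = [1, 0, 1, 0]

# Word n-gram TF-IDF features plus a linear classifier are a common
# baseline for distinguishing account types by writing style.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Predict on an unseen post.
print(model.predict(["Wake up, the election is RIGGED, retweet!"]))
```

Any real replication would depend on a labelled troll corpus and the feature set described in the full report.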
Contents |
Topics | Big Data; Information Operations; The Internet; Machine Learning; Russia; Social Media Analysis; United States
URL | https://www.rand.org/pubs/research_reports/RRA519-1.html |
Source think tank | RAND Corporation (United States)
Citation statistics |
Resource type | Think tank publication
Item identifier | http://119.78.100.153/handle/2XGU8XDN/524133
Recommended citation (GB/T 7714) | William Marcellino, Kate Cox, Katerina Galai, et al. Human-machine detection of online-based malign information. 2020.
Files in this item |
File name/size | Resource type | Version type | Access type | License
RAND_RRA519-1.pdf (3408 KB) | Think tank publication | | Restricted access | CC BY-NC-SA
1599591211928.jpg (9 KB) | Think tank publication | | Restricted access | CC BY-NC-SA