Thi Tran, Binghamton University – Creating Tools to Better Track Online Misinformation

Figuring out which misinformation will cause the most harm is important. But how?

Thi Tran, assistant professor of management information systems at Binghamton University, is turning to machine learning and blockchain for a little help.

Thi Tran is currently an assistant professor of management information systems at the Binghamton University School of Management. He holds a PhD in Information Technology, with a specialization in cybersecurity research, from the Department of Information Systems and Cyber Security at the University of Texas at San Antonio. He also holds a Master of Science in Information Technology and Management, specializing in data analytics and IT project management, from The University of Texas at Dallas. Before entering the information technology field, he earned a Bachelor of Business Administration from the University of Economics in Ho Chi Minh City, Vietnam, specializing in human resource management and strategic management.

He has been teaching management information systems courses since 2020, covering cybersecurity, information assurance, data analytics, artificial intelligence, the Internet of Things, big data, and IT project management.

Creating Tools to Better Track Online Misinformation

Every day, updates and breaking news flood the web and our social media feeds. This deluge of content has escalated public demand for reliable information. The perils of fake news have prompted news outlets, social media platforms, and government bodies to adopt new strategies, emphasizing fact-checking and flagging misleading content.

But not all misinformation is equal. How can content creators and mitigators focus their efforts on the misinformation that poses the most harm? My research explores solutions that use a machine-learning framework and an expanded application of blockchain technology to identify where misinformation will cause the most harm.

Our research centers on identifying the factors that amplify the harm potential of misinformation. The machine-learning system would assess the potential harm of a piece of content, drawing on algorithms, data, and user characteristics to create a harm index. This index reflects the severity of the harm a person could suffer if exposed to that fake news, helping content creators and mitigators more easily flag misinformation that could cause havoc if allowed to spread unchecked.
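To make the harm-index idea concrete, here is a minimal sketch of how such a score might combine content-level and user-level signals. The feature names, weights, and scoring scheme are purely hypothetical placeholders for illustration; they are not the actual model used in this research.

```python
# Hypothetical harm-index sketch: combine content risk with user vulnerability.
# All signal names and weights below are invented for illustration only.

def harm_index(content: dict, user: dict) -> float:
    """Return a 0-1 harm score for showing this content to this user."""
    # Hypothetical content-level risk signals, each already scaled to 0-1.
    content_risk = (
        0.4 * content["topic_sensitivity"]  # e.g., health or election topics
        + 0.3 * content["virality"]         # predicted likelihood of sharing
        + 0.3 * content["deceptiveness"]    # classifier confidence it is false
    )
    # Hypothetical user-level vulnerability signals, each scaled to 0-1.
    user_vulnerability = (
        0.5 * user["topic_exposure"]        # how often the user sees this topic
        + 0.5 * user["susceptibility"]      # past engagement with misinformation
    )
    # A harmful post shown to an invulnerable user scores low, and vice versa.
    return content_risk * user_vulnerability

post = {"topic_sensitivity": 0.9, "virality": 0.8, "deceptiveness": 0.7}
reader = {"topic_exposure": 0.6, "susceptibility": 0.4}
score = harm_index(post, reader)
```

In a sketch like this, mitigators could sort incoming content by score and review the highest-index items first, which is the prioritization the harm index is meant to enable.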

Notably, our research also explores the use of blockchain technology in the fight against fake news, alongside assessing user acceptance of that technology. Our research model lets us test different theories and determine which is most effective at persuading people to adopt blockchain tools against misinformation.

We propose a survey of 1,000 people that targets two key groups (fake news mitigators and content users) and gauges their willingness to adopt three existing blockchain systems in different scenarios.

Ultimately, our research aims to create tools that make tracking and removing misinformation easier. We also hope it makes the public more aware of these patterns so that people can avoid spreading misinformation unintentionally.
