Empirical study on Microsoft malware classification

CHIVUKULA, Rohit, VAMSI, Mohan, TANGIRALA, Jaya Lakshmi and HARINI, Muddana (2021). Empirical study on Microsoft malware classification. International Journal of Advanced Computer Science and Applications, 12 (3), 509-515.

Tangirala-EmpiricalStudyOnMicrosoft(VoR).pdf - Published Version
Creative Commons Attribution.

Official URL: https://thesai.org/Publications/ViewPaper?Volume=1...
Open Access URL: https://thesai.org/Publications/ViewPaper?Volume=1... (Published version)

Abstract

Malware is a computer program that causes harm to software and systems. Cybercriminals use malware to gain access to sensitive information exchanged via the infected software. An important task in protecting a computer system from a malware attack is to identify whether a given piece of software is malware. Tech giants such as Microsoft are engaged in developing anti-malware products. Microsoft's anti-malware products are installed on over 160M computers worldwide and examine over 700M computers monthly, generating a huge number of data points that can be analyzed as potential malware. Microsoft launched a challenge on the coding-competition platform Kaggle.com to predict the probability that a computer running the Windows operating system is affected by malware, given features of the machine. The dataset provided by Microsoft consists of 10,868 instances with 81 features, classified into nine classes. These features correspond to files of type asm (data with assembly language code) as well as binary format. In this work, we build a multi-class classification model to determine which class a malware sample belongs to. We use K-Nearest Neighbors, Logistic Regression, Random Forest and XGBoost in a multi-class setting. As some of the features are categorical, we apply one-hot encoding to make them suitable for the classifiers. Prediction performance is evaluated using log loss. We analyze accuracy using only asm features, only binary features, and finally both. XGBoost outperforms the other classifiers, with a log-loss value of 0.078 when only asm features are considered, 0.048 when only binary features are used, and a final log loss of 0.03 when all features are used.
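The two mechanics the abstract names, one-hot encoding of categorical features and the multi-class log-loss metric, can be sketched in plain Python. This is a minimal illustration with made-up toy labels and probabilities, not the paper's actual pipeline (which runs the listed classifiers over the Kaggle dataset's asm and binary features):

```python
import math

def one_hot(labels, num_classes):
    """Encode integer class labels as one-hot vectors, the same idea
    applied to the categorical features before classification."""
    return [[1.0 if j == y else 0.0 for j in range(num_classes)]
            for y in labels]

def multiclass_log_loss(y_true, y_prob, eps=1e-15):
    """Average negative log-probability assigned to the true class;
    probabilities are clipped to avoid log(0)."""
    total = 0.0
    for y, probs in zip(y_true, y_prob):
        p = min(max(probs[y], eps), 1.0 - eps)
        total -= math.log(p)
    return total / len(y_true)

# Toy example with 3 of the 9 malware classes (hypothetical values):
y_true = [0, 2, 1]
y_prob = [[0.90, 0.05, 0.05],
          [0.10, 0.10, 0.80],
          [0.20, 0.70, 0.10]]
print(round(multiclass_log_loss(y_true, y_prob), 4))
```

A perfect classifier would score 0; the reported 0.03 with all features combined means XGBoost assigned nearly all probability mass to the correct class on average.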

Item Type: Article
Uncontrolled Keywords: 0803 Computer Software; 1005 Communications Technologies; 46 Information and computing sciences
Identification Number: https://doi.org/10.14569/ijacsa.2021.0120361
Page Range: 509-515
SWORD Depositor: Symplectic Elements
Depositing User: Symplectic Elements
Date Deposited: 01 Mar 2024 11:10
Last Modified: 01 Mar 2024 11:15
URI: https://shura.shu.ac.uk/id/eprint/33301

