Empirical study on Microsoft malware classification

CHIVUKULA, Rohit, VAMSI, Mohan, TANGIRALA, Jaya Lakshmi and HARINI, Muddana (2021). Empirical study on Microsoft malware classification. International Journal of Advanced Computer Science and Applications, 12 (3), 509-515. [Article]

Documents
Tangirala-EmpiricalStudyOnMicrosoft(VoR).pdf - Published Version
Available under License Creative Commons Attribution.

Abstract
Malware is a computer program that causes harm to the software it infects. Cybercriminals use malware to gain access to sensitive information exchanged via the infected software. A key task in protecting a computer system from a malware attack is identifying whether a given piece of software is malware. Tech giants such as Microsoft are engaged in developing anti-malware products. Microsoft's anti-malware products are installed on over 160M computers worldwide and examine over 700M computers monthly, generating a huge number of data points that can be analyzed as potential malware. Microsoft launched a challenge on the coding competition platform Kaggle.com to predict the probability that a computer system running the Windows operating system will be affected by malware, given features of the Windows machine. The dataset provided by Microsoft consists of 10,868 instances with 81 features, classified into nine classes. These features correspond to files of type asm (data with assembly language code) as well as binary format. In this work, we build a multi-class classification model to determine which class a malware sample belongs to. We use K-Nearest Neighbors, Logistic Regression, Random Forest, and XGBoost in a multi-class setting. As some of the features are categorical, we apply one-hot encoding to make them suitable for the classifiers. Prediction performance is evaluated using log loss. We analyze the accuracy using only the asm features, only the binary features, and finally both. XGBoost outperforms the other classifiers, achieving a log loss of 0.078 when only asm features are considered, 0.048 when only binary features are used, and 0.03 when all features are used.
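The workflow described in the abstract (one-hot encoding categorical features, training multi-class classifiers, scoring with log loss) can be sketched as below. This is a minimal illustration on synthetic data, not the authors' code: the feature names, dataset shape, and classifier settings here are placeholders, and scikit-learn classifiers stand in for the full set of models (XGBoost is omitted to avoid an extra dependency).

```python
# Hedged sketch of the evaluation pipeline from the abstract: one-hot encode
# categorical features, train multi-class classifiers, compare by log loss.
# All data below is synthetic; the real study uses Microsoft's 10,868-sample,
# 81-feature, 9-class Kaggle malware dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n, n_classes = 600, 9

# Synthetic stand-ins: a few numeric features plus one categorical column
# (in the paper, features are derived from .asm and binary file contents).
X_num = rng.normal(size=(n, 5))
X_cat = rng.integers(0, 4, size=(n, 1))   # hypothetical categorical feature
y = rng.integers(0, n_classes, size=n)    # malware family labels 0..8

# One-hot encode the categorical column so distance- and margin-based
# classifiers can consume it.
enc = OneHotEncoder(handle_unknown="ignore")
X = np.hstack([X_num, enc.fit_transform(X_cat).toarray()])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

results = {}
for name, clf in [
    ("LogisticRegression", LogisticRegression(max_iter=1000)),
    ("RandomForest", RandomForestClassifier(n_estimators=100, random_state=0)),
]:
    clf.fit(X_tr, y_tr)
    # Multi-class log loss over predicted class probabilities,
    # the evaluation metric used in the study.
    results[name] = log_loss(
        y_te, clf.predict_proba(X_te), labels=np.arange(n_classes)
    )
    print(f"{name}: log loss = {results[name]:.3f}")
```

On random labels both models score near the chance-level log loss of ln(9) ≈ 2.20; on the real dataset the paper reports values as low as 0.03 for XGBoost with all features.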