
Assignment 2

Machine Learning Modelling

Background

Early Incident Identification

Disc Consulting Enterprises (DCE) has identified some potentially suspicious attacks on their network and computer systems. The attacks are thought to be a new type of attack from a skilled threat actor. To date, the attacks have only been identified ‘after the fact’ by examining post-exploitation activities of the attacker on compromised systems. Unfortunately, the attackers are skilled enough to evade detection and the exact mechanisms of their exploits have not been identified.

The incident response team, including IT services, security operations, security architecture, risk management, the CISO (Chief Information Security Officer), and the CTO (Chief Technology Officer) have been meeting regularly to determine next steps.

It has been suggested that the security architecture and operations teams could try to implement some real-time threat detection using machine learning models that build on earlier consultancy your firm has completed (i.e., building upon your Assessment 1 work).

Data description

The data have already been provided (in Assessment 1), and the ML team (you) have undertaken some initial cleaning and analysis.

Things to keep in mind:

Each event record is a snapshot triggered by an individual network ‘packet’. The exact triggering conditions for the snapshot are unknown, but it is known that multiple packets are exchanged in a ‘TCP conversation’ between the source and the target before an event is triggered and a record created. It is also known that each event record is anomalous in some way (the SIEM logs many events that may be suspicious).

The malicious events account for a very small proportion of the data. As such, your training needs to consider the “imbalanced” data and the effect these data may have on accuracy (both specificity and sensitivity).

A very small proportion of the data are known to be corrupted by their source systems and some data are incomplete or incorrectly tagged. The incident response team indicated this is likely to be less than a few hundred records.

The fields captured in each event record are described below.
Assembled Payload Size (continuous)
The total size of the inbound suspicious payload. Note: This would contain the data sent by the attacker in the “TCP conversation” up until the event was triggered

DYNRiskA Score (continuous)
An un-tested in-built risk score assigned by a new SIEM plug-in

IPV6 Traffic (binary)
A flag indicating whether the triggering packet was using IPV6 or IPV4 protocols (True = IPV6)

Response Size (continuous)
The total size of the reply data in the TCP conversation prior to the triggering packet

Source Ping Time (ms) (continuous)
The ‘ping’ time to the IP address which triggered the event record. This is affected by network structure, number of ‘hops’ and even physical distances.

For example:

- < 1 ms is typically local to the device
- 1-5 ms is usually within the local network
- 5-50 ms is often geographically local to a country
- ~100-250 ms is typically trans-continental to servers
- 250+ ms may be trans-continental to a small network

Note, these are estimates only and many factors can influence ping times.

Operating System (Categorical)
A limited ‘guess’ as to the operating system that generated the inbound suspicious connection. This is not accurate, but it should be somewhat consistent for each ‘connection’

Connection State (Categorical)
An indication of the TCP connection state at the time the packet was triggered.

Connection Rate (continuous)
The number of connections per second by the inbound suspicious connection made prior to the event record creation

Ingress Router (Binary)
DCE has two main network connections to the ‘world’. This field indicates which connection the events arrived through

Server Response Packet Time (ms) (continuous)
An estimation of the time from when the payload was sent to when the reply packet was generated. This may indicate server processing time/load for the event.

Packet Size (continuous)
The size of the triggering packet

Packet TTL (continuous)
The time-to-live of the previous inbound packet. TTL can be a measure of how many ‘hops’ (routers) a packet has traversed before arriving at our network.

Source IP Concurrent Connection (Continuous)
How many concurrent connections were open from the source IP at the time the event was triggered

Class (Binary)
Indicates if the event was confirmed malicious, i.e. 0 = Non-malicious, 1 = Malicious

The needle in the haystack

The data were gathered over a period of time and processed by several systems in order to associate specific events with confirmed malicious activities. However, the number of confirmed malicious events was very low, with these events accounting for less than 1% of all logged network events.

Because the events associated with malicious traffic are quite rare, the rates of ‘false negatives’ and ‘false positives’ are important.

Scenario

Following the meetings of the security incident response team, it has been decided to try to make an ‘early warning’ system that extends the functionality of their current SIEM. It has been proposed that DCE engage 3rd party developers to create a ‘smart detection plugin’ for the SIEM.

The goal is to have a plug-in that extracts data from real-time network events, sends it to an external system (your R script) and receives a classification in return. However, for the plug-in to be effective it must consider the alert fatigue experienced by security operations teams, as excessive false positives can lead to the team ignoring real incidents. At the same time, because the impact of a successful attack is very high, false negatives could allow attackers to take over the whole network.

Your job

Your job is to develop the detection algorithms that will provide the most accurate incident detection. You do not need to concern yourself with the specifics of the SIEM plugin or software integration; your task is to focus on accurate classification of malicious events using R.

 

You are to test and evaluate two machine learning algorithms to determine which supervised learning model is best for the task as described.

Task

You are to import and clean the same MLData2023.csv that was used in the previous assignment. Then run, tune and evaluate two supervised ML algorithms (each with two types of training data) to identify the most accurate way of classifying malicious events.

Part 1 – General data preparation and cleaning

a) Import the csv file into RStudio. This version is the same as Assignment 1.

b) Write the appropriate code in RStudio to prepare and clean the MLData2023 dataset as follows (a minimal code sketch is provided after these steps):

i. Clean the whole dataset based on the suggestions/feedback you received for Assignment 1.
ii. Filter the data to only include cases labelled with Class = 0 or 1.
iii. For the feature Operating.System, merge the three Windows categories together to form a new category, say Windows_All. Furthermore, merge iOS, Linux (Unknown), and Other to form the new category named Others. Hint: use the forcats::fct_collapse(.) function.
iv. Similarly, for the feature Connection.State, merge INVALID, NEW and RELATED to form the new category named Others.
v. Select only the complete cases using the na.omit(.) function, and name the dataset MLData2023_cleaned.

c) Briefly outline the preparation and cleaning process in your report and explain why you believe the above steps were necessary.
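A minimal sketch of the Part 1 b) steps is given below, assuming the dplyr and forcats packages are available. Note that the three Windows level names passed to fct_collapse are placeholders only; check levels(MLData2023$Operating.System) and substitute the labels that actually appear in your data.

# Minimal sketch of the Part 1 b) cleaning steps (Windows level names are assumed placeholders)
library(dplyr)
library(forcats)

MLData2023 <- read.csv("MLData2023.csv", stringsAsFactors = TRUE)

MLData2023_cleaned <- MLData2023 %>%
  # (i) apply any additional cleaning suggested in your Assessment 1 feedback here
  # (ii) keep only the cases labelled as Class 0 or 1
  filter(Class %in% c(0, 1)) %>%
  # (iii) collapse the Operating.System categories
  mutate(Operating.System = fct_collapse(Operating.System,
    Windows_All = c("Windows 10+", "Windows Server 2008", "Windows 7"),  # placeholder labels
    Others = c("iOS", "Linux (Unknown)", "Other"))) %>%
  # (iv) collapse the Connection.State categories
  mutate(Connection.State = fct_collapse(Connection.State,
    Others = c("INVALID", "NEW", "RELATED")))

# (v) keep complete cases only
MLData2023_cleaned <- na.omit(MLData2023_cleaned)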

Use the code below to generate two training datasets (one unbalanced mydata.ub.train and one balanced mydata.b.train) along with the testing set (mydata.test). Make sure you enter your student ID into the command set.seed(.).

 

# Separate samples of non-malicious and malicious events
library(dplyr)

dat.class0 <- MLData2023_cleaned %>% filter(Class == 0)  # non-malicious
dat.class1 <- MLData2023_cleaned %>% filter(Class == 1)  # malicious

# Randomly select 19800 non-malicious and 200 malicious samples,
# then combine them to form the training samples
set.seed(Enter your student ID)
rows.train0 <- sample(1:nrow(dat.class0), size = 19800, replace = FALSE)
rows.train1 <- sample(1:nrow(dat.class1), size = 200, replace = FALSE)

# Your 20000 unbalanced training samples
train.class0 <- dat.class0[rows.train0, ]  # non-malicious samples
train.class1 <- dat.class1[rows.train1, ]  # malicious samples
mydata.ub.train <- rbind(train.class0, train.class1)
mydata.ub.train <- mydata.ub.train %>%
  mutate(Class = factor(Class, labels = c("NonMal", "Mal")))

# Your 39600 balanced training samples, i.e. 19800 non-malicious and malicious samples each
set.seed(123)
train.class1_2 <- train.class1[sample(1:nrow(train.class1), size = 19800, replace = TRUE), ]
mydata.b.train <- rbind(train.class0, train.class1_2)
mydata.b.train <- mydata.b.train %>%
  mutate(Class = factor(Class, labels = c("NonMal", "Mal")))

# Your testing samples
test.class0 <- dat.class0[-rows.train0, ]
test.class1 <- dat.class1[-rows.train1, ]
mydata.test <- rbind(test.class0, test.class1)
mydata.test <- mydata.test %>%
  mutate(Class = factor(Class, labels = c("NonMal", "Mal")))

Note that in the master dataset, the percentage of malicious events is less than 1%. This distribution is roughly represented by the unbalanced data. The balanced data are generated by up-sampling the minority class using bootstrapping. The idea here is to ensure the trained model is not biased towards the majority class, i.e. non-malicious events.

Part 2 – Compare the performances of different ML algorithms

Randomly select two supervised learning modelling algorithms to test against one another by running the following code. Make sure you enter your student ID into the command set.seed(.). Your 2 ML approaches are given by myModels.

set.seed(Enter your student ID)

models.list1 <- c("Logistic Ridge Regression",
                  "Logistic LASSO Regression",
                  "Logistic Elastic-Net Regression")
models.list2 <- c("Classification Tree",
                  "Bagging Tree",
                  "Random Forest")

myModels <- c(sample(models.list1, size = 1),
              sample(models.list2, size = 1))
myModels %>% data.frame()

For each of your two ML modelling approaches, you will need to:

Run the ML algorithm in R on the two training sets with Class as the outcome variable.
Perform hyperparameter tuning to optimise the model (a minimal tuning sketch is provided after this list):

- Outline your hyperparameter tuning/searching strategy for each of the ML modelling approaches. Report the search range(s) used for hyperparameter tuning, which k-fold CV was used, the number of repeated CVs (if applicable), and the final optimal tuning parameter values and relevant CV statistics (i.e. CV results, tables and plots), where appropriate.
- If your selected tree model is Bagging, you must tune the nbagg, cp and minsplit hyperparameters, with at least 3 values for each.
- If your selected tree model is Random Forest, you must tune the num.trees and mtry hyperparameters, with at least 3 values for each.
- Be sure to set the randomisation seed using your student ID.
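As an illustration only (your two algorithms are assigned at random by the code above), a tuning sketch for a Random Forest using caret with the ranger engine might look like the following. The grid values, the 5-fold CV and the candidate num.trees values are assumptions for illustration, not values prescribed by the brief.

# Illustrative Random Forest tuning sketch (all grid values are assumptions; requires the ranger package)
library(caret)

set.seed(123)  # replace 123 with your student ID
ctrl <- trainControl(method = "cv", number = 5)

rf.grid <- expand.grid(mtry = c(2, 4, 6),
                       splitrule = "gini",
                       min.node.size = 1)

# caret's "ranger" method does not tune num.trees in the grid,
# so loop over candidate values and keep the CV results for each
rf.fits <- lapply(c(100, 300, 500), function(nt) {
  train(Class ~ ., data = mydata.b.train,
        method = "ranger",
        num.trees = nt,
        tuneGrid = rf.grid,
        trControl = ctrl)
})

A penalised logistic regression model can be tuned in the same framework by switching to method = "glmnet" and supplying a grid of lambda (and alpha, if elastic-net) values.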

Evaluate the predictive performance of your two ML models, derived from the balanced and unbalanced training sets, on the testing set. Provide the confusion matrices, then report and describe the following measures in the context of the project:

False positive rate
False negative rate
Overall Accuracy
Precision
Recall
F-score

For the precision, recall and F-score metrics, you will need to do a bit of research as to how they can be calculated. Make sure you define each of the above metrics in the context of the study.
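The measures above can be obtained in R from the test-set predictions along the following lines, treating "Mal" as the positive class; final.model is a placeholder for whichever tuned model you are evaluating.

# Illustrative calculation of the evaluation measures ('final.model' is a placeholder object)
library(caret)

pred <- predict(final.model, newdata = mydata.test)
cm <- confusionMatrix(pred, mydata.test$Class, positive = "Mal")
cm$table  # confusion matrix: rows = predicted, columns = actual

TP <- cm$table["Mal", "Mal"];     FP <- cm$table["Mal", "NonMal"]
FN <- cm$table["NonMal", "Mal"];  TN <- cm$table["NonMal", "NonMal"]

FPR <- FP / (FP + TN)                  # false positive rate
FNR <- FN / (FN + TP)                  # false negative rate
accuracy <- (TP + TN) / sum(cm$table)  # overall accuracy
precision <- TP / (TP + FP)
recall <- TP / (TP + FN)               # also known as sensitivity
f1 <- 2 * precision * recall / (precision + recall)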

Provide a brief statement on your final recommended model and why you chose it. Parsimony, and to a lesser extent interpretability, may be taken into account if the decision is close. You may outline your penalised model if it helps with your argument.

What to submit

Gather your findings into a report (maximum of 5 pages), citing relevant sources where necessary.

Present how and why the data were cleaned and prepared, how the ML models were tuned, and provide the relevant CV results. Lastly, present how the models performed relative to each other in both the unbalanced and balanced scenarios. You may use graphs, tables and images where appropriate to help your reader understand your findings. All tables and figures should be appropriately captioned and referenced in-text.

Make a final recommendation on which ML modelling approach is the best for this task.

Your final report should look professional, include appropriate headings and subheadings, should cite facts and reference source materials in APA-7th format.

Your submission must include the following:

Your report (5 pages or less, excluding cover/contents/reference/appendix page). The report must be submitted through TURNITIN and checked for originality.
A copy of your R code, and three csv files corresponding to your two training sets and a testing set. The R code and data sets are to be submitted separately via another submission link.

Note that no marks will be given if the results you have provided cannot be confirmed by your code. Furthermore, all pages exceeding the 5-page limit will not be read or examined.

Marking Criteria

Accurate implementation of data cleaning and of each supervised machine learning algorithm in R. This criterion is strictly about the code:

(1) Does the code work from start to finish?
(2) Are the results reproducible?
(3) Are all the steps performed correctly?
(4) Is it your own work?

Explanation of data cleaning and preparation (10%).

- Corresponds to Part 1 b).
- Briefly outline the reasons for sub-parts (i) and (ii).
- Provide justifications for the merging of categories, i.e. sub-parts (iii) and (iv).

An outline of the selected modelling approaches, the hyperparameter tuning and search strategy, the corresponding performance evaluation in the training sets (i.e. CV results, tables and plots), and the optimal tuning hyperparameter values (20%).

- Penalised logistic regression model: outline the range of values for your lambda and alpha (if elastic-net). Plot/tabulate the CV results. Outline the optimal value(s) of your hyperparameter(s). Outline the coefficients if required for your argument of model choice.
- Tree models: outline the range of the hyperparameters (bagging and RF). Justify your choice of cp values. Tabulate the CV results, i.e. the top combinations and the optimal OOB misclassification error.

Presentation, interpretation and comparison of the performance measures (i.e. confusion matrices, false positives, false negatives, etc.) among the selected ML algorithms, and justification of the recommended modelling approach (30%).

- Provide the confusion matrices (frequencies, proportions) in the test set, e.g.:

  Predicted/Actual    Yes                          No
  Yes                 Freq1 (Sensitivity %)        Freq2 (False positives %)
  No                  Freq3 (False negatives %)    Freq4 (Specificity %)

- Interpretation of the above metrics (including accuracy, precision, recall and F-score) in the context of the study.

Report structure and presentation, including tables and figures and, where appropriate, proper citations and referencing in APA 7th style (20%). The report should be clear and logical, well structured, and mostly free from communication, spelling and grammatical errors.

- Overall structure, presentation and narrative.
- Referencing.
- Tables and figures are clear, and properly labelled and referenced.
- Spelling and grammar.

Academic Misconduct

Academic misconduct, which includes but is not limited to, plagiarism; unauthorised collaboration; cheating in examinations; theft of other student’s work; collusion; inadequate and incorrect referencing; will be dealt with in accordance with the Rule for Academic Misconduct (including Plagiarism) Policy. Ensure that you are familiar with the Academic Misconduct Rules.
