Risk Assessment: Harnessing Positive Risks in ICT Systems
Nwagu, Chikezie Kennethα, Omankwu, Obinnaya Chinecheremσ & Okonkwo, Obikwelu Raphaelρ
Every organization continually assesses its network, data and communication devices for risks, to ensure risks are proactively averted; where they cannot be avoided, mitigation plans are put in place to ensure minimal business impact if they are triggered. This research work is a product of practical investigations of ICT systems, with immediate focus on data and communication devices. It demonstrates the direct utilization and application of positive risks and emphasizes that risks are not entirely negative; it also covers protection against negative risks alongside the conscious activation and utilization of positive risks. Risk assessment is every stakeholder's responsibility and should be carried out as often as possible, helping stakeholders proactively monitor their data and communication devices against invasion and unforeseen vulnerabilities. The general and long-standing perception is that risks and their impacts are entirely negative. This work lays bare salient opportunities (positive risks) associated with data and communication devices and examines their inherent dormant features from direct industry and practical application points of view, features which could be activated to further enhance their usage and maximize their benefits to organizations and stakeholders. Specific positive risks associated with specific data and communication devices were identified, and their applications and utilizations discussed. The methodologies used are highlighted, and risk assessment data gathering techniques explicitly x-rayed. Strategies for mitigating negative risks and harnessing positive risks are also explicitly discussed, and the principal advantages of positive risks enumerated. Finally, the work shows how positive risks have contributed immensely to the changing trend of computing today, notably the advent, deployment and use of cloud computing.
Keywords: positive risks (opportunities), negative risks (threats), risk assessment, vulnerabilities, virtualization, cloud computing.
Author α ρ: Computer Science Department, Nnamdi Azikiwe University Awka Anambra State, Nigeria.
σ : Computer Science Department, Michael Okpara University of Agriculture Umudike Umuahia, Abia State, Nigeria.
This work explored and used the Structured Systems Analysis and Design Method (SSADM), the Dynamic Systems Development Method (DSDM) and the Spiral methodology.
Reasons for their adoption are their direct applicability and their respective features, which include:
- Intensive user involvement
- Clear and easily understandable documentation
- Procedural process
- Focus on business need and on-time delivery
- Continuous, clear communication without compromising quality
- For Spiral:
- Risk driven, keeping track of risk patterns in a project
- Iterative and incremental
RISKS AND RISK ASSESSMENT
Risks, contrary to the general notion, are both positive and negative. Risk, as adapted from Stoneburner, Goguen & Feringa (2002), is therefore the net negative or positive effect of the exercise of vulnerabilities or opportunities, which can be exploited, enhanced, shared, transferred, or even accepted.
Risk assessment is a continuous process of identifying, analyzing, prioritizing and evaluating risks. For negative risks, it is done along with a thorough evaluation of available controls, with the intention of recommending more robust and effective controls or enhancing existing ones in order to reduce or eliminate vulnerabilities. Vulnerabilities, if exercised, whether intentionally exploited or accidentally triggered, could lead to loss of integrity, availability and confidentiality. Hence one of the principal reasons for carrying out risk assessment against negative risks is to ascertain the degree of potency and resilience of the available controls in order to make appropriate control recommendations. This further ensures full protection of the organization's huge investments and raises awareness of risk trends; it practically assists organizations to be proactively and strategically positioned against unforeseen threats. For positive risks, the aim is to maximize and optimize the use of additional features of the IT systems for the organization and stakeholders. Risk assessment therefore assists in discovering unique features (positive risks) of IT systems which might have been dormant or underutilized.
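The identify-analyze-prioritize cycle described above can be sketched numerically. A common, simple model scores each risk as probability times impact, signed so that threats and opportunities can sit in a single register. The register entries and the scoring scale below are illustrative assumptions, not data from the assessment itself.

```python
# A minimal sketch of risk prioritization: each identified risk is scored
# as probability x impact, with positive scores for opportunities and
# negative scores for threats. The scoring model and all register entries
# are invented for illustration, not taken from any standard.

def risk_exposure(probability, impact, positive=False):
    """Return a signed exposure score: negative for threats,
    positive for opportunities."""
    score = probability * impact
    return score if positive else -score

# Hypothetical register entries: (name, probability 0-1, impact 1-10, positive?)
register = [
    ("Unpatched switch firmware", 0.6, 8, False),
    ("Layer 3 capability of intelligent switch", 0.9, 7, True),
    ("Accidental VLAN misconfiguration", 0.3, 5, False),
]

# Rank by absolute exposure so the highest-stakes risks, whether threats
# or opportunities, surface first for treatment.
ranked = sorted(register,
                key=lambda r: abs(risk_exposure(r[1], r[2], r[3])),
                reverse=True)
```

Ranking by absolute exposure is one design choice among several; it reflects the paper's view that an overlooked opportunity can matter as much as a threat.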
A good example is the deployment of intelligent Ethernet switches to work as OSI layer 3 devices in addition to their primary, well-known OSI layer 2 function. This is further explored in section IV.
RISK ASSESSMENT DATA GATHERING TECHNIQUES
Risk assessment data gathering happens principally in the initial steps of the risk assessment process of ICT systems and throughout their life span: before, during and after deployment. However, it is advisable to initiate risk assessment as soon as the need for the ICT system is established.
The techniques used are:
- Prompt List: This is a predetermined list of risk categories that might give rise to risk types (negative or positive) for individual IT devices. It was used as a framework to aid idea generation during risk identification.
- Assumption and Constraint Analysis: Every ICT system is conceived and developed based on a set of assumptions and within a series of constraints. Assumption and constraint analysis explores the validity of the assumptions and constraints to determine which constitute a risk to full utilization of the systems. Vulnerabilities may be identified from the inaccuracy, inconsistency or incompleteness of the assumptions. Constraints may give rise to opportunities (positive risks) through removing or relaxing a limiting factor in the design of the systems, as detected in intelligent Ethernet switches such as the Cisco Catalyst 3550, 3580 and 4948.
- Root Cause Analysis: This was used to discover the underlying causes of a problem statement and to develop preventive action. It was therefore used to identify threats and vulnerabilities by starting with a clear problem statement and exploring which threat or vulnerability might have made that problem occur. It was also used to find opportunities by starting with a benefit statement and exploring which opportunities might result in that benefit being realized, such as the virtualization technology (VT) used in the popular cloud computing, high availability etc.
- SWOT Analysis: This examined data and communication devices from the strengths, weaknesses, opportunities and threats (SWOT) perspectives. SWOT analysis identifies opportunities in the devices that may be utilized through their strengths, and threats resulting from weaknesses that may be avoided or reduced. The analysis was also used to examine the degree to which the strengths may offset the threats, and to determine whether weaknesses might hinder opportunities.
- Brainstorming: This was used to obtain a comprehensive list of IT device risks. IT teams in my professional networks and fora were engaged and performed brainstorming, often with a multidisciplinary set of experts who were not part of the team. With the author as facilitator, ideas about ICT device risks were generated in a traditional free-form brainstorming session and then organized into categories of risk, such as a risk breakdown structure (RBS), from high-level categories down to finer risk levels: for example, risk type, likely sources, probability of occurrence, motivation etc.
- Delphi Technique: The Delphi technique, which seeks a means of reaching Subject Matter Expert (SME) consensus, was also used. As facilitator, a simple questionnaire was used to solicit ideas about the important risks, i.e. the main opportunities and vulnerabilities. The responses from round 1 were summarized and recirculated to the experts for final comment to reinforce earlier responses. Consensus was reached in a few rounds of this process. The Delphi technique helped reduce bias in the data and prevented any person from having undue influence on the outcome, as responses were submitted anonymously. See the result output in Appendix A.
- Expert Judgement: Risks identified were further validated directly by consulting experts with relevant experience of similar projects or business areas. Such experts were identified and invited via online fora; they considered all aspects of the ICT devices and suggested possible risks based on their previous experience and areas of expertise. The experts' biases were taken into account, and OEMs' websites/portals and device documentation were checked for confirmation and updates.
- Document Analysis: Risks were identified from a structured review of system/device documents (technical, administrative etc.). Uncertainty or ambiguity in the documents, as well as inconsistencies within a document or between documents, were indicators of risk which propelled further investigation to ascertain clarity.
- Checklist Analysis: Risk identification checklists were developed based on historical information and knowledge accumulated from previous similar systems and from other sources of information. The lowest level of the RBS was used as a risk checklist. These made clear some common IT system vulnerabilities encountered in live environments. However, it is advisable that the checklist be pruned from time to time to remove or archive outdated items; the exercise should also incorporate new lessons learned so the checklist improves for use in future IT systems.
- On-site Interviews: Oral interviews with IT system support and management personnel (case study: Mantrac Nigeria Limited – Caterpillar Nigeria) were conducted in order to collect useful information about the IT systems (e.g., how each system is operated and managed). On-site visits also allowed direct observation and gathering of information about the physical, environmental and operational security of the IT devices. For devices in the design phase, the on-site visit, a face-to-face data gathering exercise, provided the opportunity to evaluate the physical environment in which the devices will operate.
- Use of Automated Scanning Tools: Different tools for different platforms (Windows, Linux and other open-source OSs) were used to detect IT system vulnerabilities, and other proactive technical methods were used to collect system information efficiently. For example, software such as MBSA (Microsoft Baseline Security Analyzer), Advisor etc. was used to identify the services that run on a large group of hosts, providing a quick way of building individual profiles of the target IT devices, which immensely aided in gathering common security vulnerabilities on stand-alone and networked workstations.
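The numeric side of the Delphi rounds described above can be sketched as follows: anonymous scores per round are summarized and recirculated until their spread shrinks. The median/interquartile-range consensus rule and all the scores below are illustrative assumptions; the actual study used a questionnaire rather than this exact rule.

```python
# A sketch of a quantitative Delphi process: experts anonymously score a
# candidate risk each round; the facilitator summarizes the round (median)
# and measures disagreement (interquartile range, IQR). Convergence of the
# IQR below a threshold is treated as consensus. All numbers are invented.
from statistics import median, quantiles

def round_summary(scores):
    """Summarize one Delphi round: (median score, interquartile range)."""
    q1, _, q3 = quantiles(scores, n=4)
    return median(scores), q3 - q1

def consensus_reached(scores, iqr_threshold=1.0):
    """Treat a small spread of opinion as consensus (threshold is arbitrary)."""
    _, iqr = round_summary(scores)
    return iqr <= iqr_threshold

# Round 1: wide disagreement; Round 2: scores converge after the summary
# of round 1 is recirculated to the experts.
round1 = [3, 9, 5, 8, 2, 7]
round2 = [6, 7, 6, 7, 6, 6]
```

The anonymity the paper highlights lives outside this sketch (in how scores are collected); the code only captures the summarize-and-recirculate loop.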
V. HARNESSING POSITIVE RISKS
Risks are not entirely negative, as the general notion tends to hold; a risk can also be positive, in which case it is an opportunity. As an opportunity, it can be enhanced, exploited, shared or accepted. Positive risks lay bare most of the latent capabilities of IT devices in addition to their known primary roles. These latent capabilities can therefore be harnessed and fully utilized for the maximum benefit of the devices, offering assurance and justification for the huge capital invested in them. Management and organizations can thus cut excess purchasing costs: a device whose primary role is needed, and whose positive risks (opportunities) cover the core roles of other devices, can effectively and efficiently substitute for those devices.
An outstanding example is the use of intelligent Ethernet switches such as the Cisco Catalyst 3550, 3580 and 4948. The layer 3 (network layer) role earlier asserted to belong to routers is now among the strengths and opportunities (positive risks) which these switches offer.
Layer 3 switching includes Layer 3 routing capabilities. Many of the current-generation Catalyst Layer 3 switches can use routing protocols such as BGP, RIP, OSPF and EIGRP to make optimal forwarding decisions. With these, the switches operate at layer 3 in addition to layer 2 (data-link layer) of the OSI model. There is evidence also that the Cisco Catalyst 4948 10 Gigabit Ethernet switch operates at layers 2, 3 and 4 of the OSI model.
Hence the principal network services offered by the switches, which used to be core functions of routers and routers only, are:
Security - Access Control Lists (ACLs): the use of Access Control Entries (ACEs) to identify and grant access to trustees or bona fide entities seeking access.
Switch port ACLs and router ACLs share common features; the few differences and advantages are mainly due to the former operating at layer 2. Switch port ACLs are similar to router ACLs but are supported on physical interfaces and configured on Layer 2 interfaces of a switch. All three access list types are configurable on the switches: standard, extended and MAC-extended. Unlike router ACLs, switch port ACLs support only inbound traffic filtering.
Processing of a port ACL is similar to that of a router ACL: the switch examines the ACLs associated with features configured on a given interface and permits or denies packet forwarding based on the packet-matching criteria in the ACL.
If applied to a trunk port, the ACL filters traffic on all VLANs present on the trunk port. When applied to a port with a voice VLAN, the ACL filters traffic on both the data and voice VLANs.
The main benefit of port ACLs is that they can filter both IP traffic (using IP access lists) and non-IP traffic (using MAC access lists). That is, a Layer 2 interface can have both an IP access list and a MAC access list applied to it at the same time.
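The first-match, implicit-deny processing of port ACLs described above can be sketched as follows. The ACE format and rule set are hypothetical simplifications; real switch ACLs match many more fields (protocols, ports, MAC addresses).

```python
# A simplified model of ACL processing: ACEs are evaluated in order, the
# first match decides, and an implicit deny ends every list. Matching here
# is by source-address prefix only; real ACEs are far richer. The rule set
# and subnets are invented for illustration.

def evaluate_acl(acl, packet):
    """Return 'permit' or 'deny' for a packet dict against an ordered ACL.
    Each ACE is (action, src_prefix); first match wins; implicit deny."""
    for action, src_prefix in acl:
        if packet["src"].startswith(src_prefix):
            return action
    return "deny"  # the implicit deny at the end of every ACL

# Hypothetical inbound port ACL: admin subnet allowed, rest of 10.1.0.0/16
# blocked, office LAN allowed, everything else implicitly denied.
port_acl = [
    ("permit", "10.1.1."),
    ("deny",   "10.1."),
    ("permit", "192.168."),
]
```

Note how ordering matters: swapping the first two ACEs would lock out the admin subnet, which is exactly the class of misconfiguration the cited router-misconfiguration literature warns about.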
Rate limiting is basically used to limit and control the rate of traffic sent or received over a network interface. This offers protection against Denial-of-Service (DoS) attacks, limiting upload speed on LAN ports and download speed on the WAN port.
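Rate limiting of this kind is commonly implemented with a token-bucket scheme, sketched below. The rate and burst size are illustrative assumptions, and real switch rate limiters work in hardware rather than on Python objects.

```python
# A sketch of the token-bucket scheme commonly behind rate limiting:
# tokens accumulate at a fixed rate up to a burst depth, and a packet is
# forwarded only if a token can be spent. Parameters are illustrative.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate      # tokens added per second (sustained rate)
        self.burst = burst    # maximum bucket depth (allowed burst)
        self.tokens = burst
        self.last = 0.0

    def allow(self, now, cost=1):
        """Refill by elapsed time, then spend `cost` tokens if available."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True   # forward the packet
        return False      # drop (or mark) the packet

bucket = TokenBucket(rate=2, burst=4)  # 2 packets/s sustained, bursts of 4
```

A burst of five back-to-back packets sees the fifth dropped; after a quiet second, the bucket has refilled enough to forward again, which is the DoS-protection property the text describes.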
Advanced Quality of Service (QoS) enables packets to be queued, classified, prioritized and policed to ensure packet delivery optimization and efficiency, while congestion is actively avoided. Configuration of QoS is greatly simplified through automatic QoS (auto-QoS), a feature that detects devices, mainly IP phones, and automatically configures the switch for appropriate packet classification and queuing.
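The classify-queue-prioritize behaviour described above can be sketched with a simple strict-priority queue. The traffic classes and packet labels are invented for illustration; real auto-QoS policies are considerably richer (policing, weighted queues, DSCP marking).

```python
# A minimal sketch of QoS classification and strict-priority queuing:
# packets are placed into a queue by traffic class and dequeued strictly
# by priority, so voice leaves before video, and video before bulk data.
# Class names and packets are hypothetical.
import heapq

PRIORITY = {"voice": 0, "video": 1, "data": 2}  # lower number = served first

class QosQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tiebreaker preserving FIFO order within a class

    def enqueue(self, packet, traffic_class):
        heapq.heappush(self._heap,
                       (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = QosQueue()
q.enqueue("backup-chunk", "data")    # arrives first but is lowest priority
q.enqueue("rtp-frame", "voice")
q.enqueue("stream-frame", "video")
```

Strict priority is the simplest discipline to illustrate the point; production switches usually blend it with weighted round-robin to stop low classes from starving.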
Security Enhancement: Using the switches as routers does not in any way trade off security, as there are many security features that make them fit for purpose: protection of the network, administration of traffic, prevention of unauthorized users, and granting and tracking of granular access to the network.
The security features are:
Secure Shell (SSH): as a protocol, SSH provides a secure remote access connection to network devices. Communication between the client and server is encrypted in both SSH version 1 and SSH version 2.
Kerberos, according to Cisco Press, is a secret-key network authentication protocol. It uses the Data Encryption Standard (DES) cryptographic algorithm for encryption and authentication as it authenticates requests for network resources. Kerberos uses the concept of a trusted third party to perform secure verification of users and services; this trusted third party is called the key distribution center (KDC). Kerberos verifies that users are who they claim to be and that the network services they use are what the services claim to be. To do this, a KDC or trusted Kerberos server issues tickets to users. These tickets, which have a limited life span, are stored in user credential caches. The Kerberos server uses the tickets, instead of user names and passwords, to authenticate users and network services.
Simple Network Management Protocol version 3 (SNMPv3) protects administrative and network management information from tampering or eavesdropping by encrypting it.
Terminal Access Controller Access Control System (TACACS+) or Remote Authentication Dial-In User Service (RADIUS) authentication centralizes access control of the switches and restricts unauthorized users from altering the configurations.
There is also the option of configuring a local username and password database on the switch itself. In addition, fifteen levels of authorization on the switch console and two levels on the web-based management interface make different levels of access possible, offering administrators different granular configuration capabilities.
High-performance IP routing is basically the intelligent routing of IP packets between different IP networks: instead of the traditional forwarding of frames via ports using a forwarding table based on Media Access Control (MAC) addresses, the IP routing table is used.
These switches have proprietary architectures, such as the Cisco Express Forwarding (CEF)-based routing architecture on the Cisco switches, which allow for increased scalability and performance. The switches route IP primarily in hardware, which also ensures high-performance dynamic IP routing. These architectures allow very high-speed lookups while ensuring the stability and scalability necessary to meet present dynamic ICT demands. With these features, the switches can improve network performance when used as stackable wiring-closet switches or as a top-of-the-stack wiring-closet aggregator switch.
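The routing-table lookup at the heart of this layer 3 forwarding is a longest-prefix match, sketched below with Python's standard ipaddress module. The routes and next-hop names are invented, and hardware CEF lookups use specialised data structures rather than a linear scan.

```python
# A sketch of longest-prefix-match (LPM) forwarding: among all routes whose
# prefix contains the destination address, the most specific one (longest
# mask) wins. The routing table below is hypothetical.
from ipaddress import ip_network, ip_address

routes = {
    ip_network("10.0.0.0/8"):    "core-uplink",
    ip_network("10.20.0.0/16"):  "distribution-sw",
    ip_network("10.20.30.0/24"): "access-sw",
    ip_network("0.0.0.0/0"):     "default-gw",   # default route matches all
}

def lookup(dest):
    """Return the next hop for the most specific matching route."""
    addr = ip_address(dest)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]
```

The linear scan keeps the idea visible; CEF achieves the same result with precomputed trie-like structures so lookup cost does not grow with table size.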
VLAN and inter-VLAN connection: This is basically the creation of logical boundaries based on device types, users and functions; for example, there could be a finance users VLAN, a servers VLAN or a control systems VLAN, each ensuring secure and seamless communication within the VLAN. This happens at layer 2 and is one of the traditional, primary roles of the switches, while inter-VLAN connection, which ensures that one VLAN communicates with another, happens at layer 3 as traffic from one VLAN is routed to others over VLAN trunks.
Ethernet switching across OSI layers 2, 3 and 4: Switches basically carry out the Ethernet switching function within layer 2 of the OSI model. In addition, Catalyst Ethernet switches (3500 and 4800 series) extend this function to layer 3, while the 4800 series extends it further to layer 4. What happens at each layer is therefore briefly explained so that this positive risk can be understood and appreciated. According to Cisco Press (Sivasubramanian B. et al.), the switching process for each layer is as follows:
For Layer 2 switching:
Switching is based on MAC address
Restricts scalability to a few switches in a domain
May support Layer 3 features for QoS or access-control
For Layer 3 switching:
Switching is based on IP address
Interoperates with Layer 2 features
Enables highly scalable designs
For Layer 4 switching:
Switching is based on protocol sessions. In other words, Layer 4 switching uses not only source and destination IP addresses in switching decisions, but also IP session information contained in the TCP and User Datagram Protocol (UDP) portions of the packet. The most common method of distinguishing traffic with Layer 4 switching is to use the TCP and UDP port numbers.
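The distinction between the three switching layers above comes down to which header fields form the forwarding key. A toy model follows, with illustrative field names rather than an actual switch API:

```python
# A toy model of what "switching at" each layer means: the forwarding
# decision keys on a different slice of the headers depending on the layer.
# The frame dictionary and its field names are invented for illustration.

def forwarding_key(frame, layer):
    if layer == 2:   # Layer 2: destination MAC address only
        return frame["dst_mac"]
    if layer == 3:   # Layer 3: destination IP address
        return frame["dst_ip"]
    if layer == 4:   # Layer 4: IP addresses plus TCP/UDP session info
        return (frame["src_ip"], frame["dst_ip"],
                frame["protocol"], frame["dst_port"])
    raise ValueError("unsupported layer")

frame = {"dst_mac": "aa:bb:cc:dd:ee:ff", "src_ip": "10.0.0.5",
         "dst_ip": "10.20.30.7", "protocol": "tcp", "dst_port": 443}
```

The layer 4 key is what lets a switch treat, say, HTTPS and backup traffic to the same server differently, which the text identifies as port-number-based distinction.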
With all these, the switches still keep their traditional LAN switching properties in addition to the positive risks identified, as depicted in figures 1 and 2.
Figure 1: Traditional LAN setup without activating and harnessing positive risks associated with intelligent switches.
A traditional LAN setup must have both a switch and a router in order for packets to be routed in and out of the network.
Figure 2: Utilizing Positive risk of the intelligent switch as both router and switch.
Utilizing positive risks establishes that only intelligent Ethernet switches are needed to route packets in and out of a network, without compromising any security; in fact, security is further enhanced and assured.
Another outstanding example is virtualization technology, where a physical device (server or workstation) has the capability and capacity to house virtual devices (servers or workstations) of the same or better specifications and functionality than comparable physical devices. This is palpable in Microsoft Hyper-V, VMware ESXi, Oracle VirtualBox etc.
Historically, virtualization technology (VT) has existed since the 1960s, but the opportunity (positive risk) became fully activated and came into the limelight in the 1990s. This is obvious in the Dell Latitude D series of the early 2000s, which is VT-enabled but must have the feature activated in the BIOS (Basic Input/Output System), just like most other positive risks.
Nowadays, however, due to market demand and the dynamic technology trend, most Original Equipment Manufacturers (OEMs) make this positive risk a principal, default feature of their products. With the VT approach, instead of acquiring several physical workstations and servers, one physical server with virtualization capability is purchased and several virtual servers/workstations are created on it with the same or enhanced capabilities (memory (RAM), CPU, storage, operating system etc.) as the corresponding physical machines. These virtual machines (VMs) are presented to the user communities, who access them as if they were physical devices; the functionality and capability are the same or, in most cases, better, depending on the specifications and configurations of the virtual devices. See the depiction in figures 3 and 4.
With virtualization technology, organizations gain significant capital and operational efficiencies as a result of improved workstation/server utilization and consolidation, dynamic resource allocation and management, workload isolation, security and automation. Virtualization makes on-demand self-provisioning of services and software-defined orchestration of resources easy and possible.
Figure 3: Non-virtualized servers (physical servers) – Acquiring one physical server for each server function
Figure 4: Virtualized Servers – Acquiring just one Physical server on which many virtualized servers are built and deployed.
The VMs sit on the hypervisor abstraction layer, which rests on the bare metal of the physical machine (IT device). Comparing traditional infrastructure and virtualized infrastructure, as discussed below, further justifies the industry importance of this positive risk to organizations and the entire IT world:
A. Traditional Infrastructure (non-virtualized):
- A physical machine has a single OS image with inflexible, non-scalable specifications such as processor, memory etc.
- Most often highly under-utilized, hence a costly infrastructure
- Attempting to run many applications comes with bottlenecks such as interrupt conflicts and freezing of the processor and the entire machine over time
- Disaster resolution and recovery are difficult and far more time-consuming
B. Virtualized Infrastructure:
- A Virtual Machine's (VM's) OS and applications are independent of the hardware.
- Each VM carries out a specific role/function and is optimally utilized.
- Easy, flexible and scalable to provision and recover, and relatively faster for issue resolution.
- Faster and quicker to set up
Furthermore, Virtualization could take any of the forms such as:
Full Virtualization: the VM and its operating system are not aware that they reside in a virtualized environment. The simulated hardware is virtualized and created by the host; hence the VMs run and operate as if they were independent physical machines, in both capability and capacity.
Partial Virtualization: the VM can run many applications, but an entire operating system cannot run wholly in the VM; the host simulates only some instances of the underlying hardware. Simply put, each VM consists of an independent address space, so this is address space virtualization.
Para-virtualization: here the VM is aware that it resides in a virtualized environment and, with the appropriate driver installed, can issue commands to the host operating system. There is explicit, direct communication between the VMs and the hypervisor to share activity such as interrupt handling, thread management and memory management.
Virtualization, as a positive risk, is the cornerstone of cloud computing. Without virtualization technology, cloud computing and its benefits might not have progressed the way they have today; indeed, cloud computing might never have seen the light of day. Simply put, cloud computing was born out of virtualization technology, and it has eliminated most of the bottlenecks associated with traditional computing.
ADVANTAGES OF POSITIVE RISKS
Among numerous advantages of these positive risks to organizations are:
- A huge amount of money (Capital Expenditure, CAPEX), which would have been used to acquire physical IT devices and meet their associated power consumption, is saved.
- Management, deployment, control and inspection of the virtual environment are made simple, much easier than with traditional physical servers/workstations or other IT devices.
- Offers great flexibility and scalability for different environments – production, tests or simulations.
- Substantially made cloud computing a reality, and simple and flexible to orchestrate and manage.
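The CAPEX saving listed above can be illustrated with a rough consolidation calculation; every price and power figure below is a hypothetical assumption, chosen only to show the shape of the arithmetic rather than real market costs.

```python
# An illustrative consolidation calculation: N physical servers versus one
# virtualization host carrying N VMs. Hardware prices, wattages and the
# electricity tariff are all invented assumptions.

def capex_saving(n_servers, server_cost, host_cost,
                 server_watts, host_watts, kwh_price, years):
    """Return (cost of N physical servers) - (cost of one virtualized host),
    counting purchase price plus electricity over the given period."""
    hours = 24 * 365 * years
    physical = n_servers * (server_cost
                            + server_watts / 1000 * hours * kwh_price)
    virtual = host_cost + host_watts / 1000 * hours * kwh_price
    return physical - virtual

# Ten 400 W servers at $3,000 each vs one 1,200 W host at $12,000,
# at $0.10/kWh over three years.
saving = capex_saving(n_servers=10, server_cost=3000, host_cost=12000,
                      server_watts=400, host_watts=1200,
                      kwh_price=0.10, years=3)
```

Under these assumed figures the saving comes to roughly $25,000 over three years; the point is the structure (purchase plus power, consolidated onto one host), not the specific numbers.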
STRATEGIES FOR HARNESSING POSITIVE RISKS
Exploit: This strategy is used for risks with positive impacts where the stakeholders wish to ensure the opportunity is realized. It seeks to eliminate the uncertainty associated with a particular upside risk by ensuring the opportunity definitely happens: for example, engaging a vastly experienced expert to configure and administer the ICT devices, who ensures that the devices' full potential is utilized and who embraces new technology trends, including upgrades, in order to proactively minimize vulnerabilities and negative risks.
Enhance: This is used to increase the probability of occurrence and/or the positive impact of an opportunity. Identifying and maximizing the key drivers of a positive-impact risk may increase its probability of occurrence. For example, changing or upgrading a device's software (operating system, applications etc.) and hardware will increase throughput and security.
Share: Prior to sharing, the Delphi technique could be used among time-tested experts with relevant experience across different platforms to explore ICT systems thoroughly for positive risks, as was fully done during risk identification and information gathering. This allows organizations to know the positive risks inherent in their devices and to have full conviction and justification of the need for a particular device before purchase. Sharing a positive risk involves allocating some or all of the ownership of the opportunity to a third party who is best able to capture it for the benefit of the stakeholders. For example, risk-sharing partnerships, teams or joint ventures can be established with the express purpose of taking advantage of the opportunity, so that all stakeholders gain from their actions.
Accept: Accepting an opportunity is being willing to take advantage of it if it arises, but not actively pursuing it.
STRATEGIES FOR HANDLING NEGATIVE RISKS
The main strategies used to deal with threats which, if they occur, may compromise data/information integrity, availability and confidentiality by exploiting the vulnerabilities in the devices are:
Risk Avoidance: This strategy is used where the risk impact is high; the stakeholders act to eliminate the threat. The most radical avoidance strategy is to shut down the devices or disconnect them from the network. This may prompt the stakeholders to consult the manufacturers for an immediate solution if there is no other alternative.
Risk Transfer: Here, the stakeholders shift the impact of the threat, and ownership of the response, to a third party by use of insurance, warranties, guarantees etc.
Risk Mitigation: In this strategy, stakeholders act early to reduce the probability of occurrence or the impact of a risk, thereby keeping the risk within an acceptable threshold.
Risk Acceptance: This is used for both negative and positive risks. Here, stakeholders decide to acknowledge the risks and take no action unless a risk occurs. The strategy nevertheless provides room for periodic review of the threats to ensure that the risks do not change significantly; this also applies to risks under close monitoring, such as those in risk registers. For positive risks, stakeholders who have been made aware of the inherent opportunities of the devices may decide not to exploit and utilize them, thereby sticking to the traditional use of the devices.
Every ICT device has one or more positive risks, but they need to be discovered and activated. Once discovered, it is advisable to seek expert judgement and Original Equipment Manufacturer (OEM) confirmation, and crucially, the services and/or advice of Subject Matter Experts (SMEs) with relevant hands-on experience who are conversant with technology and security trends. This ensures all associated positive risks are detected and optimally utilized. Furthermore, since attacks assume dynamic forms, it is advisable to charge network engineers, systems administrators and experts to make thorough risk assessment a daily routine and to carry out aggressive end-user awareness campaigns on what to do once they sense any vulnerability that could lead to device compromise. Risk assessment is the duty of all stakeholders; hence all stakeholders are enjoined to ensure consistent two-way communication and consultation. The overall objective is to minimize (if not totally eliminate) the likelihood of occurrence and/or exploitation of negative risks, and to maximize the likelihood of exploitation of the positive risks inherent in the devices. This assures full utilization of the devices, which in the end ensures a fair return on investment (ROI) and justification of their purchase.
- Stoneburner, G., Goguen, A. & Feringa, A. (2002). Risk Management Guide for Information Technology Systems. Retrieved January 4, 2015 from http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf
- Alshboul, A. (2010). Information Systems Security Measures and Countermeasures: Protecting Organizational Assets from Malicious Attacks. Communications of the IBIMA, 2010, Article 486878, 1-9. Retrieved March 15, 2015 from http://www.ibimapublishing.com/journals/CIBIMA/2010/486878/486878.pdf
- Manes, C. (2014). The 21 Most Common Misconfigurations That Will Come Back to Haunt You! Retrieved March 20, 2015 from http://www.gfi.com/blog/the-21-most-common-misconfigurations-that-will-come-back-to-haunt-you/
- Pascucci, M. (2012). Network Security Horror Stories: Router Misconfigurations. Retrieved March 22, 2015 from http://blog.algosec.com/2012/09/network-security-horror-stories-router-misconfigurations.html
- IRS Office of Safeguards Technical Assistance Memorandum: Protecting Federal Tax Information (FTI) Through Network Defense-in-Depth. Retrieved April 20, 2015 from https://www.irs.gov/pub/irs-utl/protecting-fti-throughnetworkdefense-in-depth.doc
- Cisco Press (2014). Cisco Networking Academy's Introduction to Basic Switching Concepts and Configuration. Retrieved June 20, 2015 from http://www.ciscopress.com/articles/article.asp?p=2181836&seqNum=7
- Valsamakis, A. C. (2003). Risk Management. Heinemann Higher and Further Education (Pty) Ltd, Sandton.
- PMI (2012). A Guide to the Project Management Body of Knowledge (PMBOK Guide), Fifth Edition.
- Sabrina, M. (2014). Dell Model Years. https://kb.wisc.edu/education/page.php?id=44855
- Young, P. V. (2013). Observation Technique: Definition, Principles and Validity. http://www.studylecturenotes.com/social-research-methodology/observation-technique-definition-principles-validity
- Cisco Press (2017). Security Configuration Guide, Cisco IOS XE Release 3SE (Catalyst 3850 Switches).
- Intel Virtualization Technology (Intel VT). https://www.intel.com/content/www/us/en/virtualization/virtualization-technology/intel-virtualization-technology.html
- Sivasubramanian, B., Frahim, E. & Froom, R. (2010). Analyzing the Cisco Enterprise Campus Architecture. Cisco Press. http://www.ciscopress.com/articles/article.asp?p=1608131
- Bhaiji, Y. (2008). Security Features on Switches. Cisco Press. http://www.ciscopress.com/articles/article.asp?p=1181682&seqNum=4