Abstract

The cloud computing environment supports a highly scalable hardware and software resource-sharing platform over the Internet. The cloud provider shares hardware resources with cloud customers through Virtual Machines (VMs). VMs running on the same physical server are denoted as co-resident VMs and are logically isolated from each other. This logical isolation can be violated through side channels built by malicious users. An attack in which a malicious user accesses sensitive information from co-resident VMs, such as cryptographic keys, workloads, and web traffic rates, is defined as a co-resident attack (also referred to as a co-location, co-residence, or co-residency attack). A VM allocation policy is used to place the VMs on the physical servers, and a malicious user tries to co-locate their VM with the target VM. The security, workload balance, and power consumption parameters are considered in the VM placement process. Secure metrics are defined to measure the safety of a VM allocation policy, and a balanced VM allocation policy is built to assign VMs to the physical servers. The Previous Selected Server First (PSSF) policy is used with the security metrics, while the Least VM, Most VM, and Random allocation policies are used with the workload balance parameter. The data centers are connected to the virtual machines within the same environment. An attack-resistant virtual machine management framework is built with centralized and distributed scheduling schemes; live VM migration is protected from side channel attacks, and the system is enhanced with multiple data center management mechanisms.
The Distributed VM Placement (DVMP) policy is built to allocate the virtual machines on the physical servers.

Index Terms: Cloud Resources, Virtual Machine Allocation Policies, Side Channel Attacks, Co-resident Attacks, Distributed Scheduling

1. Introduction

Public infrastructure-as-a-service (IaaS) clouds enable the increasingly realistic threat of malicious customers mounting side-channel attacks. An attacker obtains tenancy on the same physical server as a target and then uses careful timing of shared hardware components to steal confidential data. Damaging attacks enable theft of cryptographic secrets by way of shared per-core CPU state, such as the L1 data and instruction caches, despite the customers running within distinct virtual machines (VMs). A general solution to prevent side-channel attacks is hard isolation: completely preventing the sharing of particular sensitive resources. Such isolation can be obtained by avoiding multi-tenancy, by new hardware that enforces cache isolation, by cache coloring, or by software systems such as StealthMem. However, hard isolation reduces efficiency and raises costs because of stranded resources that are allocated to a virtual machine yet left unused.
Another approach has been to prevent attacks by adding noise to the cache. For example, in the Düppel system, the guest operating system protects against CPU cache side channels by making spurious memory requests to obfuscate cache usage. This incurs overheads and also requires users to identify the particular processes that should be protected. A final approach has been to interfere with the ability to obtain accurate measurements of shared hardware by removing or obscuring time sources. This can be done by removing hardware timing sources, reducing the granularity of clocks exposed to guest VMs, allowing only deterministic computations, or using replication of VMs to normalize timing. These solutions either have significant overheads, as in the last case, or severely limit functionality for workloads that need accurate timing.
In addition to sharing resources and having access to fine-grained clocks, shared-core side-channel attacks also require the ability to measure the state of the cache frequently [10]. For example, Zhang et al.'s cross-VM attack on ElGamal preempted the victim every 16 μs on average. With less frequent interruptions, the attacker's view of how hardware state changes in response to a victim becomes obscured.
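The effect of measurement frequency can be made concrete with a toy calculation: if the scheduler guaranteed each VM a minimum run time (MRT) before honouring a preemption, the attacker's observation rate would be capped. The following is a minimal sketch; the MRT mechanism and the numbers are assumptions for illustration, not a measured result.

```python
def observations_per_second(mrt_us, attack_interval_us=16, runtime_us=1_000_000):
    """Upper bound on cache observations an attacker can make per second
    of victim runtime, given a scheduler-enforced minimum run time (MRT)
    the victim is guaranteed before any preemption is honoured.

    With no MRT the attacker preempts at its desired interval (the 16
    microseconds reported for the cross-VM ElGamal attack); with an MRT,
    each preemption waits until the victim has run at least mrt_us.
    """
    effective_interval_us = max(mrt_us, attack_interval_us)
    return runtime_us // effective_interval_us

# No MRT: one observation every 16 us, i.e. 62,500 per second of victim time.
# A hypothetical 1 ms MRT caps this at 1,000, badly coarsening the channel.
baseline = observations_per_second(mrt_us=0)       # 62500
with_mrt = observations_per_second(mrt_us=1000)    # 1000
```

This is the intuition behind limiting cross-VM interaction frequency rather than eliminating the shared resource outright.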
Perhaps surprising, then, is the lack of any investigation of the relationship between CPU scheduling policies and side channel efficacy. In particular, scheduling may enable what we call soft isolation: limiting the frequency of potentially dangerous cross-VM interactions.

2. Related Work

A variety of cross-VM side channels have been demonstrated in the academic literature. Deficiencies in performance isolation, similar to those leveraged in this work, have been exploited for a variety of purposes [7].
Noting that cache and network utilization are often contested between VMs, a resource-freeing attack (RFA) has been proposed that allows a greedy customer to manipulate the performance of co-resident VMs by shifting their resource bottlenecks [4]. This work operates under a similar attack model to our own, targeting public cloud services and manipulating VMs from a helper host. However, where RFA is a performance enhancement strategy for the cloud, co-resident watermarking is a method of information extraction.
Cache-based side-channel attacks, in which timing differences in access latencies between the cache and main memory are exploited, have attracted the most attention in cloud computing. Most notably, Zhang et al. [6] demonstrated that the machine instructions of a co-resident VM can be recovered from shared L1 caches, permitting the reconstruction of secret keys when those keys influence the code path of a decryption routine.
Ristenpart et al. showed that cache usage can be examined as a means to measure the activity of other instances co-resident with the attacker. Furthermore, they demonstrated that they can detect co-residency with a victim's instance if they have information about the instance's computational load. In contrast, Zhang et al. [5] utilized cache-based side channels as a defensive mechanism. Their scheme works by measuring cache footprints for evidence of other VMs. Leveraging this scheme, they can challenge correct functionality on the part of the cloud provider and discover other unanticipated instances sharing the same host. Bowers et al. [8] have proposed the use of a different network timing side channel in order to challenge fault tolerance guarantees in storage clouds. This work measures the response time of random data reads in order to confirm that a given file's storage redundancy meets expectations. The scheme can be used to detect drive-failure vulnerabilities and expose cloud provider negligence.
We intend to investigate the applicability of storage cloud co-resident watermarking in future work. Raj et al. proposed two other mechanisms for preventing cache-based side channels: cache-hierarchy-aware core assignment and page-coloring-based cache partitioning. The former groups CPU cores based on last-level cache (LLC) organization and checks whether such organization conflicts with the SLAs of the clients. The latter is a software method that monitors how the physical memory used by applications maps to cache hardware, grouping applications accordingly to isolate clients. Another effective defense against cache-based side channels is changing how caches assign memory to applications, for example with non-deterministic caches. Non-deterministic caches control the lifetime of cache items: by assigning a random decay interval to each item, the cache behavior becomes non-deterministic, and hence side channels cannot exploit it. Work on performance isolation in Xen can also lead to added security benefits. Other work aims to combat virtualization vulnerabilities by reducing the role and size of the hypervisor.
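The page-coloring defense mentioned above can be made concrete. A page's "color" is the group of cache sets its lines map to; pages of different colors never contend for the same LLC sets, so giving each tenant a disjoint set of colors partitions the cache in software. The cache geometry below (2 MiB, 16-way, 64-byte lines) is an assumed example, not a specific processor.

```python
PAGE_SIZE = 4096               # bytes per physical page
LINE_SIZE = 64                 # bytes per cache line
CACHE_SIZE = 2 * 1024 * 1024   # 2 MiB last-level cache (assumed geometry)
WAYS = 16                      # associativity (assumed)

SETS = CACHE_SIZE // (LINE_SIZE * WAYS)    # 2048 cache sets
SETS_PER_PAGE = PAGE_SIZE // LINE_SIZE     # 64 consecutive sets per page
NUM_COLORS = SETS // SETS_PER_PAGE         # 32 distinct page colors

def page_color(phys_addr):
    """Color of the physical page containing phys_addr, i.e. which group
    of LLC sets its cache lines fall into. An allocator that gives each
    tenant pages of disjoint colors isolates their cache footprints."""
    return (phys_addr // PAGE_SIZE) % NUM_COLORS
```

With 32 colors, for instance, a host could give each of up to 32 mutually distrusting tenants a private slice of the LLC, at the cost of limiting each tenant to 1/32 of the cache.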
Most drastically, Keller et al. [2] eliminate a large attack surface by proposing the near elimination of the hypervisor. This is achieved through pre-allocation of resources, limited virtualized I/O devices, and modified guest operating systems. While this approach inarguably reduces the likelihood of exploitable implementation flaws in the virtualization code base, it necessarily places VMs closer to the underlying hardware. Intuitively, this can only increase the bandwidth of the isolation-compromising side channel that we explore in this work. Other proposals reduce the hypervisor attack surface by considering only specific virtualization applications, such as rootkit detection or integrity assurance for critical portions of security-sensitive code [3], or by distributing administrative responsibilities across multiple VMs [10]. We do not consider these systems in our work because they are not intended for the third-party compute cloud model.

3. Virtual Machine Allocation Policies

Security is one of the major concerns about cloud computing.
From the customer's perspective, migrating to the cloud exposes them to additional risks brought about by the other tenants with whom they share resources: are these neighbors trustworthy, or may they compromise the integrity of others? This paper concentrates on one form of this security problem: the co-resident attack. Virtual machines (VMs) are a commonly used resource in cloud computing environments. For cloud providers, VMs help increase the utilization rate of the underlying hardware platforms. For cloud customers, they enable on-demand resource scaling and outsource the maintenance of computing resources. However, apart from all these benefits, virtualization also brings a new security threat. In theory, VMs running on the same physical server are logically isolated from each other. In practice, nevertheless, malicious users can build various side channels to circumvent this logical isolation and obtain sensitive information from co-resident VMs, ranging from the coarse-grained, e.g., workloads and web traffic rates, to the fine-grained, e.g., cryptographic keys. For clever attackers, even seemingly innocuous information like workload statistics can be useful [9]. For example, such data can be used to identify when the system is most vulnerable, i.e., the best time to launch further attacks, such as denial-of-service attacks.
A straightforward solution to this novel attack is to eliminate the side channels, which has been the focus of most previous work. However, most of these methods are not suitable for immediate deployment because they require modifications to current cloud platforms. In our work, we approach the problem from a completely different perspective. Before the attacker is able to extract any private information from the victim, they first need to co-locate their VMs with the target VMs. It has been shown that the attacker can achieve an efficiency rate as high as 40%, meaning 4 out of 10 attacker VMs can co-locate with the target. This motivates us to study how to effectively minimise this value.
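The quantity being minimised can be illustrated with a small Monte Carlo sketch: under purely random placement, the fraction of attacker VMs landing on a server that already hosts a target depends on how many servers and targets there are. All parameters and names here are illustrative assumptions, not the paper's experimental setup.

```python
import random

def colocation_rate(n_servers, n_targets, n_attacker_vms, trials=10_000):
    """Estimate the fraction of attacker VMs that land on a server already
    hosting a target VM, under purely random placement (each VM assigned
    to a uniformly chosen server). This is the quantity a secure
    allocation policy tries to drive down."""
    hits = 0
    for _ in range(trials):
        target_servers = {random.randrange(n_servers) for _ in range(n_targets)}
        hits += sum(1 for _ in range(n_attacker_vms)
                    if random.randrange(n_servers) in target_servers)
    return hits / (trials * n_attacker_vms)
```

With 100 servers and 10 target VMs, for example, the rate comes out near 10%; shrinking the pool of eligible servers, or letting the attacker probe repeatedly, pushes it toward the much higher rates reported in practice.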
From a cloud provider's point of view, the VM allocation policy (also known as VM placement; we use these two terms interchangeably in this paper) is the most important and direct control that can be used to influence the probability of co-location. Consequently, we aim to design a security policy that substantially increases the difficulty for attackers to achieve co-residence. In our earlier work, we proposed a prototype of such a security policy, called the previous-selected-server-first policy (PSSF). However, this prototype policy only focuses on the problem of security, and hence has obvious limitations in terms of:

1. Workload balance. Workload here refers to the VM requests. From the cloud provider's point of view, spreading VMs among the servers that have already been switched on can help reduce the probability of servers being over-utilized, which may cause SLA (service level agreement) breaches. From the customer's perspective, it is also preferable if their VMs are distributed across the system, rather than being allocated together on the same server; otherwise, the failure of one server will impact all the VMs of a user.

2. Power consumption. It has been estimated that an average data centre consumes as much power as 25,000 households, and this figure is expected to double every 5 years. Therefore, managing the servers in an energy-efficient way is crucial for cloud providers in order to reduce power consumption and hence the overall cost. This has also been the focus of many previous works.
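The power argument in item 2 is usually made with a linear server power model: an idle server still draws a large fraction of its peak power, so consolidating VMs onto fewer servers and switching the rest off saves energy, which directly conflicts with the spreading that workload balance and security prefer. A minimal sketch, with illustrative wattages that are assumptions, not measurements:

```python
def server_power(utilisation, p_idle=100.0, p_max=250.0):
    """Commonly used linear power model: power drawn grows linearly with
    CPU utilisation, starting from a substantial idle floor. The idle and
    peak wattages here are illustrative placeholders."""
    if not 0.0 <= utilisation <= 1.0:
        raise ValueError("utilisation must be in [0, 1]")
    return p_idle + (p_max - p_idle) * utilisation

# Two servers at 50% draw more than one fully packed server with the
# second switched off, which is why consolidation saves energy:
spread = 2 * server_power(0.5)   # 350.0 W
packed = server_power(1.0)       # 250.0 W (second server powered down)
```

This tension between spreading (for balance and fault tolerance) and packing (for energy) is what makes the placement problem multi-objective.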
In this paper, we take all three aspects, security, workload balance, and power consumption, into consideration to make PSSF more applicable to existing commercial cloud platforms. Since these three objectives conflict to some extent, we improve our earlier policy by applying multi-objective optimisation techniques. In addition, we have implemented PSSF in the simulation environment CloudSim as well as on the real cloud platform OpenStack, and performed large-scale experiments involving hundreds of servers and thousands of VMs to demonstrate that it meets the requirements of all three criteria. Specifically, our contributions include: (1) we define secure metrics that measure the safety of a VM allocation policy in terms of its ability to defend against co-resident attacks; (2) we model these metrics under three basic but commonly used VM allocation policies, and conduct extensive experiments on the widely used simulation platform CloudSim to validate the models; (3) we propose a new security policy, which not only significantly decreases the probability of attackers co-locating with their targets but also satisfies the constraints on workload balance and power consumption; and (4) we implement and verify the effectiveness of our new policy using the popular open-source cloud software OpenStack as well as on CloudSim.

4. Issues on VM Allocation Policies

The virtual machine allocation policy is used to place the virtual machines on the physical servers. The malicious user co-locates their VM with the target VM. The security, workload balance, and power consumption parameters are considered in the virtual machine placement process. Secure metrics are defined to measure the safety of the VM allocation policy. The balanced VM allocation policy is built to assign VMs to the physical servers. The Previous Selected Server First (PSSF) policy is used with the security metrics. The Least VM, Most VM, and Random allocation policies are used with the workload balance parameter. The data centers are connected to the virtual machines within the same environment.
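One plausible form such a secure metric can take is coverage: the fraction of target VMs that end up sharing a physical server with at least one attacker VM under a given allocation. The function below is an illustrative sketch; the names and exact definition are assumptions, not the paper's formal metric.

```python
def coverage(allocation, attacker_vms, target_vms):
    """Fraction of target VMs co-resident with at least one attacker VM.

    `allocation` maps VM id -> server id; `attacker_vms` and `target_vms`
    are lists of VM ids. A safer allocation policy yields lower coverage
    for the same attacker effort."""
    attacker_servers = {allocation[v] for v in attacker_vms}
    hit = sum(1 for t in target_vms if allocation[t] in attacker_servers)
    return hit / len(target_vms) if target_vms else 0.0

# Two of the three targets share a server with an attacker VM:
alloc = {"a1": 0, "a2": 3, "t1": 0, "t2": 1, "t3": 3}
rate = coverage(alloc, ["a1", "a2"], ["t1", "t2", "t3"])  # 2/3
```

Comparing such a score across the Least VM, Most VM, Random, and PSSF policies for the same request stream is what the models and experiments quantify.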
The following issues are discovered in the current virtual machine allocation policies against co-resident attacks:

• The system supports a centralized allocation policy only
• Live VM migration is not protected
• Multiple data center management is not supported
• The system state information is required for the scheduling process

5. Distributed Scheduling and Virtual Machine Management Framework

The virtual machine placement operations are performed in a centralized and distributed manner. The virtual machines are placed according to workload and energy levels.
The data center selection process is used to find the suitable data center for the workloads. The system is divided into six major modules: physical server deployment, data center management, workload controller, security analyzer, centralized VM placement, and distributed VM placement. The physical servers and virtual machines are configured in the deployment process. The data center management module is built to organize the data centers and shared data items. The workload controller is used to collect workloads from the users. The security analyzer is built to estimate the security metrics for the VMs. The centralized VM placement is employed to control co-resident attacks. Live VM migration and multiple-data-center-based allocation are performed under the distributed VM placement model.

5.1. Physical Server Deployment

The physical server deployment is used to set up the cloud with shared resources. The physical servers and their configuration levels are collected from the cloud provider. The virtual machine configurations are assigned according to the provider's choice. The physical server and virtual machine association levels are updated during the deployment process.

5.2. Data Center Management

The data centers and their contents are maintained under the data center management module. Data center storage and usage levels are monitored at intervals.
Shared data and its request frequency are maintained in the data center. Communication between virtual machines and data centers is controlled with security levels.

5.3. Workload Controller

The workload controller monitors the workload execution process. The workloads are collected from the cloud users. The workload-to-data associations are verified by the controller. The workload status is monitored and updated by the controller.

5.4. Security Analyzer

The security analyzer is used to estimate the security levels for the virtual machines. The secure metrics are used to estimate the security levels. The workload balance parameter is considered in the security metrics, and the power consumption levels are estimated in the secure metric estimation process.

5.5. Centralized VM Placement

The centralized VM placement is carried out with the balanced VM allocation policy.
The Previous Selected Server First (PSSF) algorithm is used for the VM placement process. A single data center is used to provide the data values. The secure metrics are used in the VM placement process.

5.6. Distributed VM Placement

The secure metrics estimation and initial VM placement operations are tuned for the distributed scheduling model. The Distributed VM Placement (DVMP) algorithm is used to allocate the virtual machines in a distributed manner.
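Both the centralized PSSF step and its distributed variant rest on the same core rule: prefer servers previously selected for the same user, so an attacker cannot reach new targets simply by requesting more VMs. A minimal sketch of that rule follows; the function and field names and the flat capacity model are assumptions, and the published policy additionally weighs workload balance and power consumption.

```python
def pssf_place(user, servers, history, capacity):
    """Sketch of the previous-selected-server-first rule.

    `servers` maps server id -> current VM count; `history` maps user ->
    set of servers previously selected for that user. A new VM reuses a
    previously selected server whenever one still has room; only then is
    a new server opened for this user and recorded."""
    # Reuse a previously selected server if any still has capacity.
    for s in sorted(history.get(user, ())):
        if servers[s] < capacity:
            return s
    # Otherwise open a new server for this user: here simply the least
    # loaded one, remembered for future requests by the same user.
    s = min(servers, key=servers.get)
    history.setdefault(user, set()).add(s)
    return s

servers = {0: 1, 1: 0}
history = {"alice": {0}}
first = pssf_place("alice", servers, history, capacity=2)   # reuses server 0
```

Because a user's VMs keep landing on the same small set of servers, the number of distinct servers (and hence distinct potential victims) an attacker can touch grows far more slowly than the number of VMs they launch.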
The data center allocation algorithm is used to select data centers for the virtual machines. The live VM migration operations are secured with the attack-resistant VM migration algorithm.

6. Conclusion

The cloud computing environment provides IT resources to users on demand. Co-resident attacks are mounted by co-located malicious users through side channels. The Previous Selected Server First (PSSF) policy is applied for secure VM placement. The attack-resistant VM placement is built with centralized and distributed scheduling, live VM migration, and multiple data center management. The virtual machine placement operations are carried out with a side channel attack control mechanism. The VM placement policies are improved to manage centralized and distributed placement models. The co-location control model is tuned to handle the VM migration tasks. Data center communications are also protected in the allocation scheme.