Abstract:
In recent times, Multi-Access Edge Computing (MEC) has emerged as a promising application service provisioning model that facilitates convenient access to services for mobile users. The central idea of MEC is to have service providers deploy application services as containers on MEC servers located near mobile base stations. As users move around, their service invocations are typically routed to, and served from, nearby MEC servers along their route, with improved latency and turnaround times. This provisioning model is increasingly being acknowledged as a convenient, near-user, low-latency alternative to traditional cloud computing. Driven by new innovations in MEC, the number of application services (e.g. object recognition, obstacle identification, navigation, maps, games, e-commerce) hosted at edge servers for mobile users (e.g. autonomous vehicles, drones, users on the move) is also growing at a considerable pace.
A typical MEC deployment involves the orchestration of a number of policies for tasks such as service allocation, service placement, service migration, service replication, user management and resource scheduling. These policies coordinate as a whole to ensure low-latency access and continued availability for end users. Indeed, designing MEC policies that account for different scenarios and optimization metrics is an active area of research. However, most design approaches proposed in recent literature either resort to traditional optimization techniques to ensure optimality, thereby limiting scalability, or resort to learning-based approaches without any formal guarantees on the synthesized policies. The objective of this thesis is to leverage formal methods for the design and verification of MEC policies, so as to aid scalability and provide formal guarantees on their performance.
A key challenge in MEC is to devise an efficient service placement policy, which determines the availability of application services on MEC servers. This is a non-trivial challenge, considering that typical edge servers are not as resource-equipped as their cloud counterparts; the trivial deployment of hosting all application services at all edge servers is therefore infeasible. This necessitates judicious planning and allocation of the service-server-user mapping over time as service requests arrive from different users. The problem is aggravated by the fact that users accessing these application services are typically mobile, moving in and out of the service zones of different edge servers, which often necessitates service migrations to preserve continuity of service provisioning, an acceptable Quality of Experience (QoE) and steady user-perceived latency. While a number of researchers have addressed the service placement and allocation problem, they considered traditional monolithic applications where all services of an application are deployed as a single container. Microservice-based applications, on the other hand, are split into a set of interacting microservices, each deployed independently. Service placement and allocation for microservice-based applications thus adds further intricacies to the placement and allocation landscape. To address this challenge, we design a user service allocation policy for microservice-based applications and demonstrate the benefits of prefetching microservices to improve user-perceived latency.
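As a purely illustrative sketch (not the policy developed in this thesis), the idea of allocating microservices to nearby servers while prefetching them onto a user's predicted next server could be expressed as follows; the data structures and names (`Server`, `allocate`, `allocate_with_prefetch`) are hypothetical assumptions.

```python
# Hypothetical sketch: greedy latency-aware allocation with microservice prefetching.
# All names and data structures are illustrative assumptions, not the thesis's algorithm.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    capacity: int                               # remaining container slots
    hosted: set = field(default_factory=set)    # microservices currently deployed

def allocate(servers_by_latency, microservice):
    """Place `microservice` on the lowest-latency server with spare capacity."""
    for server in servers_by_latency:           # assumed pre-sorted by latency to the user
        if microservice in server.hosted:
            return server                       # already available: serve from here
        if server.capacity > 0:
            server.hosted.add(microservice)
            server.capacity -= 1
            return server
    return None                                 # no edge capacity: fall back to the cloud

def allocate_with_prefetch(current, predicted_next, app_chain):
    """Serve an application's microservice chain now and prefetch it on the predicted next server."""
    placements = {ms: allocate([current, predicted_next], ms) for ms in app_chain}
    for ms in app_chain:                        # warm up the next server before the hand-off
        if ms not in predicted_next.hosted and predicted_next.capacity > 0:
            predicted_next.hosted.add(ms)
            predicted_next.capacity -= 1
    return placements
```

In such a scheme, prefetching trades a small amount of capacity on the predicted next server for a warm container at hand-off time, which is where the gain in user-perceived latency comes from.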
MEC servers host co-located heterogeneous applications, each with its own latency requirements. To improve load balancing, a number of application replicas are deployed on MEC servers, automatically spawned or retracted by an auto-scaling policy. As new service requests arrive, a challenging decision must be made: spawn a new thread in an already existing container, with added resource contention and possible application latency violations, or create a new service container instance, which reduces resource contention and lowers user-perceived latency. We propose the design of an auto-scaling policy that automatically determines when to retain, add or remove container instances of an application, while at the same time ensuring that the probability of latency violations is minimized.
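A minimal sketch of the thread-versus-new-container decision described above, under the assumption of two hypothetical latency estimators supplied by the caller, might look as follows:

```python
# Hypothetical sketch of the scale-out decision faced by an auto-scaling policy.
# The latency estimators, SLO threshold and attribute names are illustrative assumptions.

def scale_decision(app, server, latency_slo_ms,
                   latency_if_thread_added, latency_if_new_container):
    """Return 'reuse', 'scale_out' or 'offload' for an incoming request of `app` at `server`."""
    reuse_latency = latency_if_thread_added(app, server)    # extra contention inside an existing container
    fresh_latency = latency_if_new_container(app, server)   # new container instance, less contention

    if reuse_latency <= latency_slo_ms:
        return "reuse"          # an existing replica can absorb the request within the latency SLO
    if server.free_cpu >= app.cpu_demand and fresh_latency <= latency_slo_ms:
        return "scale_out"      # spawn a new container instance of the application
    return "offload"            # neither option meets the SLO: hand over to a neighbour or the cloud
```

An analogous scale-in rule would retract replicas whose removal keeps the estimated probability of latency violations low, which is the quantity the proposed policy seeks to minimize.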
State of the art MEC policies work oblivious to the presence of MEC server failures. In the event of a failure, a fault-recovery policy ensures service continuity by re-initializing application containers that were deployed on MEC servers prior to failure. Fault Tolerance approaches in pervasive computing environments typically deal with faults in an adhoc manner with no application specific distinction, executing recovery decisions with each occurrence of failure. To cater to the real time MEC ecosystem, we propose a prioritized approach wherein we leverage Probabilistic Model Checking to quantify the potential impact of multiple server failures, application priority and potential resource contention.
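For illustration, the kind of quantitative queries a probabilistic model checker can answer over a failure and recovery model might look as follows; the atomic propositions and the reward structure are hypothetical:

```latex
% Illustrative PCTL-style queries; labels and reward names are hypothetical assumptions.
P_{=?}\big[\, \lozenge^{\le T} \; \mathit{highPriorityAppDown} \,\big]
    % probability that a high-priority application is unavailable within T steps of a server failure
R^{\mathit{recoveryCost}}_{=?}\big[\, \lozenge \; \mathit{allPrioritizedAppsRestored} \,\big]
    % expected recovery cost accumulated until all prioritized applications are re-initialized
```

Quantities of this kind allow a recovery policy to rank which application containers to re-initialize first when several servers fail and resources are contended.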
While designing MEC policies has been the centrepiece of MEC literature, a much less explored avenue is the verification of policies, which is important for establishing whether a given policy conforms to a desired performance specification. An additional contribution of this thesis is a framework for the modeling and verification of MEC policies. Traditional approaches to performance verification either develop analytical models and mathematically derive bounds on each performance metric under consideration, or resort to simulation, resulting in inadequate representation of non-deterministic behaviour. In contrast, in this thesis we adopt a formal modeling and verification approach, which accurately represents non-determinism and quantifies its impact over all possible executions of the model. Our framework is able to deduce quantitative guarantees on policy performance under varying request distributions.
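As an example of the kind of guarantee such a framework can target (the propositions and reward names below are hypothetical), modeling the policy together with the request arrivals as a Markov decision process admits queries that quantify over every resolution of non-determinism:

```latex
% Illustrative quantitative queries over a policy model; propositions and rewards are hypothetical.
P_{\max=?}\big[\, \lozenge^{\le T} \; \mathit{latencyViolation} \,\big]
    % worst-case probability of a latency violation within horizon T
R^{\mathit{latency}}_{\max=?}\big[\, \mathrm{C}^{\le T} \,\big]
    % worst-case expected cumulative latency over horizon T
```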
We validate our proposed approaches on publicly available benchmark datasets to demonstrate their effectiveness. We believe that our work will open up new research directions in different aspects of MEC through the lens of formal methods.