SDD, Theses and HDR

Intelligent Task Offloading and Resource Allocation in 5G-Enabled Multi-Access Edge Computing for the Tactile Internet

Thesis defense. Thesis supervisor: Tara Yahiya, faculty researcher, Université Paris-Saclay, LISN.

Speaker: Yeabsira Asefa ASHENGO

Jury

  • Fabrice Valois, Full Professor, INSA de Lyon (reviewer)
  • Moez Esseghir, Associate Professor (HDR), Université de technologie de Troyes (reviewer)
  • Veronique Vèque, Professor, Université Paris Saclay (examiner)
  • Pedro Veloso Braconot, Associate Professor (HDR), CNAM de Paris (examiner)
  • Tara Yahiya, Associate Professor (HDR), Université Paris Saclay, thesis director
  • Nicola Roberto Zema, Associate Professor, Université Paris Saclay, thesis co-supervisor

Abstract

This thesis focuses on improving the quality of service (QoS) of user equipments (UEs) that process tactile data in a 5G network. These UEs cooperate with a Multi-access Edge Computing (MEC) server located near the base station to achieve ultra-reliable low-latency (URLL) communication and task computation. The UEs have low computational capacity compared with the MEC server, and their tasks are sentences from Braille documents represented as tactile data.

In the envisioned scenario, the UEs have different computational capacities and communication resources, and each processes a different number of tasks. These tasks differ in priority, size, computational requirements, and dependencies on one another. Given the heterogeneity of the environment and the UEs' insufficient computational capacity, an adaptive task offloading and resource allocation mechanism is required to achieve URLL communication and task computation.
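As an illustration of how prioritized tasks with interdependencies can be ordered before offloading, the sketch below performs a priority-driven topological sort: dependencies are always respected, and among the tasks that are ready, the highest-priority one is scheduled first. The function name `order_tasks` and the data layout are illustrative assumptions, not the thesis's actual algorithm.

```python
import heapq

def order_tasks(tasks, deps):
    """Order tasks so every prerequisite runs first; among ready tasks,
    the one with the largest priority value is scheduled earliest.

    tasks: dict mapping task id -> priority (higher = more important)
    deps:  list of (task, prereq) pairs, i.e. `task` depends on `prereq`
    """
    indeg = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for task, prereq in deps:
        indeg[task] += 1
        children[prereq].append(task)
    # Min-heap on negated priority gives us a max-priority queue.
    ready = [(-tasks[t], t) for t in tasks if indeg[t] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)
        order.append(t)
        for child in children[t]:
            indeg[child] -= 1
            if indeg[child] == 0:
                heapq.heappush(ready, (-tasks[child], child))
    return order

# Task "c" depends on "a"; "b" has the highest priority and no dependency.
print(order_tasks({"a": 1, "b": 3, "c": 2}, [("c", "a")]))  # → ['b', 'a', 'c']
```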

An asynchronous Meta Reinforcement Learning (MRL) approach is implemented for tactile task offloading between the UEs and the MEC server. The approach includes an algorithm that prioritizes and organizes UE tasks according to their importance and interdependencies, while a deep asynchronous MRL algorithm models the task offloading policy. The asynchronous MRL algorithm updates the meta-model parameters asynchronously, enhancing adaptability in a heterogeneous environment. Compared with a benchmark, the algorithm achieves improved overall performance in terms of training latency, energy consumption, and memory and CPU time usage.
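One way an asynchronous meta-update of this kind can be pictured is a Reptile-style scheme: each UE adapts the meta-parameters to its own task, and the server moves the meta-model toward whichever adapted parameters arrive first, without waiting for the other UEs. The sketch below is a minimal toy version under these assumptions (function names, learning rates, and the quadratic tasks are all illustrative, not the thesis's implementation).

```python
import random

def inner_adapt(theta, grad_fn, lr=0.05, steps=3):
    """Task-specific adaptation: a few gradient steps from the meta-parameters."""
    phi = list(theta)
    for _ in range(steps):
        g = grad_fn(phi)
        phi = [p - lr * gi for p, gi in zip(phi, g)]
    return phi

def async_meta_train(theta, task_grad_fns, meta_lr=0.5, rounds=20, seed=0):
    """Reptile-style asynchronous meta-update: each round, one UE's adapted
    parameters arrive and the meta-model moves toward them immediately,
    without a synchronization barrier across UEs."""
    rng = random.Random(seed)
    for _ in range(rounds):
        grad_fn = rng.choice(task_grad_fns)  # whichever UE reports first
        phi = inner_adapt(theta, grad_fn)
        theta = [t + meta_lr * (p - t) for t, p in zip(theta, phi)]
    return theta

# Two toy tasks: minimize (x-1)^2 and (x-3)^2; gradients are 2(x-c).
theta = async_meta_train([10.0],
                         [lambda p: [2 * (p[0] - 1)],
                          lambda p: [2 * (p[0] - 3)]])
# The meta-parameter drifts from 10.0 toward the region between the task optima.
```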

The resource allocation algorithm was implemented using a Deep Deterministic Policy Gradient (DDPG) based RL algorithm for a continuous action space. The model enables the MEC server to choose the optimal amount of resources to assign to each UE in terms of energy consumption and computation delay.
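A minimal sketch of how a DDPG-style deterministic actor can emit a continuous resource split is shown below. The linear actor, the softmax normalization (which keeps the allocation non-negative and summing to the server's capacity), and the function names are assumptions for illustration, not the thesis's model; the Gaussian action noise mirrors the standard DDPG exploration trick.

```python
import math
import random

def actor(state, weights):
    """Deterministic policy (the DDPG actor): maps the observed per-UE state
    to allocation fractions. Softmax keeps fractions positive and summing to 1."""
    logits = [sum(w * s for w, s in zip(row, state)) for row in weights]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def allocate(state, weights, total_capacity, noise=0.0, rng=None):
    """Scale the actor's fractions by the server capacity. During training,
    Gaussian noise on the deterministic action provides exploration; at
    evaluation time noise=0 recovers the pure policy."""
    rng = rng or random.Random(0)
    fracs = actor(state, weights)
    if noise:
        fracs = [max(f + rng.gauss(0.0, noise), 1e-6) for f in fracs]
        z = sum(fracs)
        fracs = [f / z for f in fracs]   # re-project onto the simplex
    return [f * total_capacity for f in fracs]

# Three UEs whose state encodes demand; identity weights pass demand through,
# so the UE with the largest demand receives the largest share of 10 units.
alloc = allocate([1.0, 2.0, 3.0],
                 [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
                 total_capacity=10.0)
```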

Videoconference link available here