Ubi-Flex-Cloud: ubiquitous flexible cloud computing: status quo and research imperatives

Purpose – Cloud computing originated in central data centers that are connected to the backbone of the Internet. The network transport to and from a distant data center incurs long latencies that hinder modern low-latency applications. In order to flexibly support the computing demands of users, cloud computing is evolving toward a continuum of cloud computing resources that are distributed between the end users and a distant data center. The purpose of this review paper is to concisely summarize the state-of-the-art in the evolving cloud computing field and to outline research imperatives.

Design/methodology/approach – The authors identify two main dimensions (or axes) of development of cloud computing: the trend toward flexibility of scaling computing resources, which the authors denote as Flex-Cloud, and the trend toward ubiquitous cloud computing, which the authors denote as Ubi-Cloud. Along these two axes of Flex-Cloud and Ubi-Cloud, the authors review the existing research and development and identify pressing open problems.

Findings – The authors find that extensive research and development efforts have addressed some Ubi-Cloud and Flex-Cloud challenges, resulting in exciting advances to date. However, a wide array of research challenges remains open, thus providing a fertile field for future research and development.

Originality/value – This review paper is the first to define the concept of the Ubi-Flex-Cloud as the two-dimensional research and design space for cloud computing research and development. The Ubi-Flex-Cloud concept can serve as a foundation and reference framework for planning and positioning future cloud computing research and development efforts.


Introduction
Cloud computing has heralded tremendous advances in applied computing and informatics around the world [1,2]. Modern societies depend on reliable and secure cloud computing for a wide range of critical functions, including education through virtual learning environments [3][4][5] and health care [6][7][8], both for classical health care topics, such as heart health [9], as well as newly emerging diseases, such as Covid-19 [10][11][12][13]. Moreover, the influence that social media exert on people, in conjunction with advanced cloud computing models, enables sophisticated cyber influence campaigns for a wide range of purposes, ranging from public health awareness to military conflicts [14][15][16][17][18]. Also, the ongoing roll-out of fifth generation wireless systems (5G) will enable a new range of use cases that require low-latency communication and compute processing [19,20], e.g. for the tactile internet and human-in-the-loop systems [21,22].
Effective cloud computing is a key enabler for this wide range of societal functions and therefore deserves close attention and further research and development so as to broadly support the advance of civilization. Past cloud computing research and development has mainly focused on reliably and efficiently providing vast computing resources in large data centers. While large data centers will likely remain important and should be further optimized, we identify two newly emerging dimensions of cloud computing research that will likely become highly important in the near- to mid-term future in the applied computing and informatics domain: Flexibility and Ubiquity. With flexibility, which we refer to as Flex-Cloud, we mean the flexibility to scale the capabilities of a given cloud computing system, e.g. to scale from a small-scale private cloud to a large-scale public cloud, as well as the flexibility to scale the performance and reliability of a given cloud computing system by varying the boundary of software- vs. hardware-based computing.
With ubiquity, which we refer to as Ubi-Cloud, we mean the continuous cloud computing support of end-user applications and end devices that are mobile across a wide range of varying spatial locations and with a wide range of network connectivities, which are often based on wireless communication. Low-latency cloud computing support is often vital for these mobile applications, which may support a wide range of critical tasks, e.g. the control of autonomous vehicles or industrial production plants [19][20][21][22].
Cloud computing has been surveyed from a wide range of perspectives. Overviews of the basic principles and terminologies of cloud computing have been provided in [23,24], while the perspective of fog computing has been covered in [25]. Scheduling mechanisms for cloud computing have been surveyed in [26,27] and related load balancing mechanisms have been surveyed in [28,29]. General nature-inspired optimization mechanisms for cloud computing have been surveyed in [30]. The communications technologies enabling cloud computing have been covered in [31], while other surveys have covered mechanisms related to security [32], fault tolerance mechanisms [33] and energy efficiency [34]. A few surveys have covered specific cloud computing application domains, such as health care [35,36] and the Internet of Things (IoT) [37]. Our review paper is orthogonal to the existing cloud computing review and survey articles in that we focus on the aspects of flexibility and ubiquity in cloud computing services, which to the best of our knowledge have not previously been covered.
This review paper presents two prominent focus areas of the Flex-Cloud concept, namely the flexible scaling of computing in private and public clouds in Section 2 as well as hardware to software flexibility in Section 3. Next, Section 4 covers the Ubi-Cloud concept of cloud computing across the network edge region, in the physical vicinity of the end-users. Section 5 covers the cloud computing support mechanisms specifically for end-user mobility. Each section describes the current state of the art and outlines research imperatives for the further development of the respective dimensions of applied cloud computing. Overarching conclusions and future research directions are provided in Section 6.
2. Flex-Cloud: scaling of computing in private and public cloud

2.1 Background and review of existing approaches

This section focuses on the Flex-Cloud concept of scaling a given cloud computing system for a given organization or set of computing tasks; whereby the scaling occurs "in-place" in the sense that mobility or edge networks are not considered in this section. Rather, this section focuses on approaches for flexibly scaling the computing power of a given cloud computing system up and down, as well as approaches to flexibly utilize either private or public clouds, both with flexibility for the scaling of the computing power as well as other vital metrics, such as availability and cost.
The need for a Flex-Cloud dimension in cloud computing originates from the rapid changes of the needs for applied computing and information technology (IT) support in today's typical organizations. Organizations may grow, re-structure, or shrink, and the cloud computing infrastructures and platforms should continuously support the development and operations in an organization throughout such changes. Dynamic changes in an organization may imply changing requirements for a wide range of applied computing resources, such as on-demand virtual machines (VMs), development platforms and production platforms. In order to satisfy these needs for flexibility, cloud computing infrastructures that are resilient and elastic should be available on-demand. While cloud computing as a fundamental concept can in principle be configured to provide low-cost, elastic platforms for development and operations tasks [2,24,38], doing so flexibly and over a wide range of scales still poses significant challenges.
One important aspect of the Flex-Cloud dimension is the flexible scaling from private cloud computing to public cloud computing, and vice versa. Traditional public clouds, such as Amazon Web Services (AWS) and Microsoft Azure, are proprietary black-box clouds that are provided by distant data centers that scale to enormous sizes [39]. While these public clouds can provide excellent reliability and elastic scaling of the subscribed cloud computing resources, they force users to relinquish full control over the data that are to be computed on. However, some data-related processes in an organization may require that the cloud computing is conducted on-site, e.g. due to compliance requirements for on-site data retention. Also, concerns about ease of administration with full control of the specifics of the data warehousing and processing may lead to a desire for operating a private cloud system on-site.
Recent research has resulted in management frameworks that employ the OpenStack platform for flexibly provisioning private cloud systems [40][41][42][43][44][45]. OpenStack controls different types of resources that are typically represented as nodes to provide cloud services. For instance, the OpenStack compute service Nova permits the creation of VM instances on demand [46]. These VMs can then be utilized to provide Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) for various departments in an organization.
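As a concrete illustration of such on-demand provisioning, the request handed to a compute service like Nova can be sketched as a simple parameter bundle. The following Python sketch only assembles the parameters; the flavor, image and network identifiers are hypothetical placeholders, and the actual openstacksdk call is indicated in a comment rather than executed:

```python
# Minimal sketch of assembling an on-demand VM (Nova server) request.
# All identifiers below are hypothetical placeholders for illustration.

def build_server_request(name, flavor_id, image_id, network_id):
    """Bundle the parameters that an openstacksdk call such as
    conn.compute.create_server(**params) would receive."""
    return {
        "name": name,
        "flavor_id": flavor_id,              # CPU/RAM sizing of the VM
        "image_id": image_id,                # OS image to boot from
        "networks": [{"uuid": network_id}],  # tenant network attachment
    }

params = build_server_request("dept-paas-vm-01",
                              "m1.small",      # placeholder flavor
                              "ubuntu-22.04",  # placeholder image
                              "11111111-2222-3333-4444-555555555555")
# With a configured cloud connection: conn.compute.create_server(**params)
```

A provisioning framework would issue many such requests programmatically as departmental demand grows or shrinks, which is precisely the elasticity the management frameworks above automate.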
Aside from these technology aspects, there are important economic considerations related to the flexible scaling of cloud computing [47,48]. Generally, a sharing economy of computational resources can be achieved by a cloud service provider that serves the aggregate of computing demands from a collection of users (customers). Further, cooperation between cloud service providers that jointly decide on federation policies can maximize the total federation profit [49]. The economies of scale achieved by large cloud service providers or the cooperation of cloud service providers generally drive down the unit cost of computing [50]. From a user perspective, narrow considerations of the rates that are charged by cloud service providers make the offloading of specific services, such as e-mail services [51], medical record keeping [52], or educational services [53], appear to be quite cost-effective, especially over short time horizons, e.g. 1-3 years [54], and if complex regulatory requirements are considered [55].
However, these considerations of the economies of scale of the charging rates do not necessarily mean that outsourcing the computing to a distant public cloud service is the best solution for any enterprise from an economic perspective. Public cloud services may be the best option at some point in an enterprise's lifetime and under specific market conditions [56]. Detailed cost analyses of the total cost of ownership (TCO) of cloud computing services versus on-premises computing over long time horizons of 4-10 years indicate that, depending on the usage scenarios, offloading to a remote cloud service may cost significantly more than keeping the computation services on a local on-premises cloud [54,57].
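The horizon-dependence of this trade-off can be illustrated with a small TCO sketch. All cost figures below are hypothetical assumptions chosen purely for illustration, not vendor quotes:

```python
# Illustrative TCO comparison of public cloud vs. on-premises computing.
# All cost figures are hypothetical assumptions, not real pricing data.

def cloud_tco(monthly_fee, years):
    """Cumulative subscription cost of a public cloud service."""
    return monthly_fee * 12 * years

def on_prem_tco(capex, annual_opex, years, refresh_every=5):
    """Cumulative on-premises cost: initial hardware purchase, periodic
    hardware refreshes, and yearly operating expenses."""
    refreshes = max(0, (years - 1) // refresh_every)
    return capex * (1 + refreshes) + annual_opex * years

# Assumed: $2,000/month cloud fee vs. $60,000 capex + $8,000/year opex.
short_horizon = (cloud_tco(2000, 3), on_prem_tco(60000, 8000, 3))  # (72000, 84000)
long_horizon = (cloud_tco(2000, 8), on_prem_tco(60000, 8000, 8))   # (192000, 184000)
```

Under these assumed numbers the cloud subscription is cheaper over a 3-year horizon, while on-premises operation is cheaper over an 8-year horizon that includes one hardware refresh, mirroring the horizon-dependent findings reported in [54,57].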

Future research and development directions
While the recent research on private clouds has provided flexible scaling mechanisms for a given private cloud system, the transition and inter-operation between private and public clouds is in its infancy [58,59]. Future research needs to examine interoperability mechanisms that permit a well-controlled seamless inter-operation of private and public clouds. The management mechanisms for scaling up from a private cloud to a public cloud and, in reverse, from a public cloud to a private cloud need to be thoroughly studied. Also, high levels of availability as well as privacy and security play increasingly important roles, both in private clouds and in inter-operating private and public clouds. Future research needs to examine strategies for ensuring high availability, e.g. clustering strategies. Furthermore, strategies for ensuring the security and privacy of documents and data to the highest levels of trustworthiness, safeguarded against a multitude of malicious attacks [60][61][62][63] while complying with applicable regional regulations, e.g. in Europe [64], need to be researched in detail.
In addition, flexible massive data processing capabilities with high levels of availability are required for the emerging digital twin (DT) concept. A DT is an integrated multiphysics, multiscale and probabilistic simulation of a system that uses high-fidelity physical models, sensor updates and historic data [65]. The twinning process is supported by the continuous interaction, communication and synchronization between the DT and its physical (real-life) twin as well as the surrounding physical environment [66]. High-fidelity physical DT models require massive data processing and high availability that likely require the seamless inter-operation of public and private clouds.
A core principle of today's cloud computing is ubiquity and availability, regardless of the underlying communication networks. However, if mission-critical and latency-sensitive constraints, as well as quality-of-experience and resilient cloud service requirements [67], are not met, there is an economic loss. Therefore, a hybrid cloud architecture envisioned in [68] with software-defined intelligence, e.g. dynamic workload aggregation and network capacity planning, could be a promising option for future cloud designs. Note that cloud economics are complex due to several parameters, e.g. performance, dedicated/shared resources, business agility, business resilience and business strategies. A three-tier market model of marketplace users and cloud providers has strived to model these complexities [69]. However, comprehensive studies are needed to understand: (1) how profitable software as a service (SaaS) providers that shoulder more computing management responsibilities are compared to PaaS or IaaS providers [70,71], (2) how the interplay among these different cloud service paradigms in terms of provider profitability and performance can be comprehensively modeled, and (3) how a customer can optimally trade off the TCO as well as the performance levels and feature availabilities of these cloud service paradigms versus on-premises computing.
3. Flex-Cloud: software vs. hardware based computing

3.1 Background and review of existing approaches

This section focuses on the Flex-Cloud concept of scaling from the computing of functions on general-purpose computers in software to computing with hardware acceleration, including the transition between these two computing paradigms. While large data centers are typically based on computing in software on general-purpose compute servers, smaller cloud systems that may be faced with specialized tasks are increasingly considered for hardware acceleration. For instance, cloud computing systems in an edge computing setting may be tasked with highly demanding specific functions that relate to the processing of wireless communication signals. The aspects of ubiquity of edge computing settings are the focus of Section 4. The present section focuses on the flexible scaling of the computing in a given cloud computing system, which may be operating at a specific location in an edge computing setting, from software to hardware based computing, and vice versa.
An extensive set of recent studies have explored strategies for accelerating the computation of a variety of specific functions, e.g. functions relating to communication signals and neural networks, on general-purpose computers [72,73]. The existing studies have mainly focused on strategies for accelerating isolated specific aspects of central processing unit (CPU) processing as well as memory accesses and input/output to the computing platforms and infrastructures.

Future research and development directions
As data computing loads typically arrive via packet-switching communication networks at the cloud computing nodes, future research needs to examine how to interface the flexible range of software and hardware based computing processing approaches with high-speed low-latency data packet input-output frameworks. Recent fast packet processing frameworks are typically based on data plane development kit (DPDK) as well as eXpress data path (XDP) and extended Berkeley Packet Filter (eBPF) techniques to speed up the input and output of data packets from the network interfaces as well as the data packet processing in software [74][75][76][77][78]. Future research needs to find flexible ways to interface data packets rapidly with both conventional software processing modules as well as hardware acceleration modules. Also, compression techniques for reducing the overhead of the packet protocol headers [79] should be integrated into the flexible novel high-speed data packet processing frameworks.
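The central idea behind these frameworks, amortizing per-packet overhead by polling bursts of packets instead of taking one interrupt per packet, can be sketched as a plain-Python analogy. This is a conceptual model in the spirit of DPDK's poll-mode receive, not the real DPDK API:

```python
# Conceptual model of poll-mode burst packet I/O (in the spirit of DPDK's
# burst receive); a plain-Python analogy, not the actual DPDK C API.

def poll_burst(rx_queue, burst_size=32):
    """Drain up to burst_size packets per poll, so that per-call overhead
    is amortized over the whole burst instead of paid per packet."""
    burst = rx_queue[:burst_size]
    del rx_queue[:burst_size]
    return burst

def process_burst(burst, handler):
    """Apply a processing function (a software module, or a stub standing
    in for a hardware-accelerated stage) to every packet in the burst."""
    return [handler(pkt) for pkt in burst]

queue = list(range(100))  # 100 dummy "packets"
out = process_burst(poll_burst(queue), lambda pkt: pkt * 2)
```

The open research question raised above is how a dispatcher at this point in the pipeline could steer each burst flexibly to either a software module or a hardware accelerator.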
Traditional cloud computing has always been about meeting the application demands, and to this end, the over-provisioning of resources, replication of data and stand-by operations have been standard techniques to meet the service level agreements (SLAs). An important direction that has emerged recently is to optimize the overall transactions in the data centers and cloud-native functions to improve the energy efficiency. Generally, computing in hardware is more energy efficient than computing in software [72,73]. Future research needs to develop and evaluate green energy computing and power saving techniques that account for the energy consumption of hardware and software computing, and these energy-saving aspects could become components of future "green SLAs".
While cloud computing in a central data center provides large-scale flexibility, edge cloud computing is targeted toward low-latency and power-efficient approaches due to the proximity to the user applications. However, the orchestration of cloud-native applications from a data center cloud to edge cloud locations (nodes) is challenging due to the large geographical distribution of edge cloud nodes and the typically heterogeneous access network characteristics. In addition, the resource allocation management of the edge cloud infrastructure so as to ensure reliability and remote monitoring of platform and network resources is challenging and requires extensive future research. For instance, a workload that needs to be instantiated on an edge cloud node typically has a smaller set of available choices for specialized hardware and platform components compared to the large set of available choices in a central data center [72,73]. As a result, cloud applications may need specific adaptations to execute efficiently on the smaller set of edge cloud node hardware and platform components. Future research needs to develop and evaluate such adaptation mechanisms so as to provide flexible efficient hardware and software supported computing both in large-resource data center clouds as well as in edge clouds with restricted sets of available hardware and platform components.
4. Ubi-Cloud: computing at the network edge

4.1 Background and review of existing approaches

This section focuses on the ubiquitous nature of cloud computing at the network edge, i.e. in the space between the end-users and the backbone of the Internet. Since a large proportion of the Internet end-users are connected via wireless links to the Internet, we consider wireless networks as a typical first hop toward the backbone of the Internet. Roughly speaking, wireless networks, such as the common fourth and fifth generation wireless systems (4G, 5G), consist of a wireless fronthaul that connects end-users via wireless communication to a radio node, e.g. a cellular base station. The radio node can be connected with a wide variety of (wireless or wired) technologies via a gateway over the so-called backhaul to the core network, e.g. the enhanced packet core (EPC) in 4G systems and the 5G packet core (5GC) in 5G systems. The core network, in turn, connects to the Internet at large. Recent research has examined the resource allocations across these different stages (layers) of wireless systems, i.e. the allocation of computation and communication resources to the radio, gateway and core network nodes, as well as to intermediate switching and gateway nodes that relay and process the traffic along the wireless end-user to Internet-at-large path. In particular, recent studies have explored the benefits of employing the software-defined networking (SDN) paradigm, which features separate control and data planes, i.e. the control is logically separated from the plane that transports and processes the actual data packets [80][81][82]. The studies found that the judicious sharing of the computation resources along the backhaul path can reduce the peak demands for computational resources in so-called multi-access edge computing (MEC, aka mobile edge computing) nodes [83][84][85][86].
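The peak-demand reduction from sharing computation resources along the backhaul path is essentially a statistical-multiplexing effect, which a small numerical sketch can illustrate. The demand traces below are invented purely for illustration:

```python
# Statistical-multiplexing sketch: provisioning each MEC node for its own
# peak vs. provisioning a shared pool for the peak of the aggregate demand.
# The per-cell demand traces below are invented for illustration.

def capacity_without_sharing(demands):
    """Each MEC node is provisioned for its own peak demand."""
    return sum(max(series) for series in demands)

def capacity_with_sharing(demands):
    """A shared resource pool only needs the peak of the aggregate demand
    summed across all cells at each time slot."""
    aggregate = [sum(slot) for slot in zip(*demands)]
    return max(aggregate)

# Two cells whose load peaks at different times of day:
demands = [[4, 1, 2], [1, 4, 2]]
isolated = capacity_without_sharing(demands)  # 4 + 4 = 8 units
pooled = capacity_with_sharing(demands)       # max(5, 5, 4) = 5 units
```

Because the two cells peak at different times, pooling needs only 5 capacity units instead of 8, illustrating the savings reported for shared MEC resources [83][84][85][86].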
A critical aspect of ubiquitous cloud computing services is to ensure the integrity of the data transmitted over wireless channels, which may drop or corrupt data packets [87]. Recent research has developed network coding techniques that invest computational complexity in order to enable the recovery of dropped or corrupted data packets without complicated synchronization or signaling. The so-called random linear network coding (RLNC) solves a matrix inversion and multiplication problem to recover the data packets [88][89][90][91][92]. The computational challenges of RLNC can be addressed with efficient computation strategies on multicore processors [93,94], or through innovative coding strategies that reduce the computing demands through sparse coding structures [95,96].
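The matrix-inversion core of RLNC can be sketched for the simplest case of the binary field GF(2), where coding coefficients are single bits, addition is XOR, and recovery reduces to Gaussian elimination. Practical RLNC typically uses larger fields such as GF(2^8); this is a deliberately simplified sketch:

```python
import random

# Simplified RLNC sketch over GF(2): coefficients are single bits and
# addition is XOR. Practical RLNC typically uses larger fields, e.g. GF(2^8).

def rlnc_encode(packets, num_coded, rng=random):
    """Emit coded packets, each a random GF(2) combination of the sources."""
    coded = []
    for _ in range(num_coded):
        coeffs = [rng.randint(0, 1) for _ in packets]
        payload = [0] * len(packets[0])
        for c, pkt in zip(coeffs, packets):
            if c:
                payload = [a ^ b for a, b in zip(payload, pkt)]
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, num_source):
    """Recover the source packets by Gaussian elimination over GF(2);
    returns None if the received packets do not have full rank."""
    rows = [(list(c), list(p)) for c, p in coded]
    for col in range(num_source):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None  # not enough innovative (linearly independent) packets
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           [a ^ b for a, b in zip(rows[r][1], rows[col][1])])
    return [rows[i][1] for i in range(num_source)]
```

Any set of linearly independent coded packets suffices for recovery, which is why RLNC tolerates packet drops without per-packet retransmission signaling; the elimination cost is what the multicore [93,94] and sparse-coding [95,96] strategies aim to reduce.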
Fiber-wireless (FiWi) access networks combine the high capacity, scalability and reliability of optical fiber networks with the flexibility and ubiquity of wireless networks to provide broadband services for mobile users as well as fixed subscribers [97]. The concept of integrating cloud computing and edge computing into the backhaul network of wireless access networks has been studied [68,97]. Results show that integrating cloud and edge computing into backhaul networks is a promising solution for 5G to provide ultra-low latency and ultra-high bandwidth at the edge of the networks [98].

Future research and development directions
The increasing trend toward ever more demanding computation applications on untethered end-devices poses a wide range of challenging problems for future research and development along the Ubi-Cloud dimension, specifically toward the goal of ubiquitous distributed cloud computing that is highly responsive to user demands, yet dispersed over the layers (stages) of the wireless network systems. The distributed nature of the computing units and the signaling delays between the units make task and resource allocation highly challenging.
Typically, classical centralized allocation algorithms are too slow to adapt to highly dynamic task load variations. A promising direction is therefore to allow local regions some autonomy for fast-paced decisions and to coordinate with a central controller over longer time horizons [99]. Federated learning, which exchanges limited learning parameter sets among multiple distributed agents that apply local learning and decision making to optimize allocations, can be one potential avenue for addressing this challenging problem [100][101][102][103]. More generally, the integration of machine learning techniques with ubiquitous cloud computing at the network edge [104][105][106][107][108] presents new workloads for Ubi-Cloud infrastructures, but also novel mechanisms for optimizing the provisioning and operation of such Ubi-Clouds. Both these workload characteristics and the optimization mechanisms need to be thoroughly researched.
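The parameter-only exchange at the heart of federated learning can be expressed as a minimal federated-averaging sketch; the learning rate, gradients and parameter vectors below are illustrative assumptions:

```python
# Minimal federated-averaging sketch: edge agents exchange only model
# parameters with the coordinator, never their raw local data.
# Learning rate, gradients and weights are illustrative assumptions.

def local_update(weights, gradient, lr=0.1):
    """One local gradient-descent step executed at an edge agent."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(agent_weights):
    """Coordinator step: element-wise average of the agents' parameters."""
    n = len(agent_weights)
    return [sum(ws) / n for ws in zip(*agent_weights)]

# One round with two agents computing different gradients on local data:
w0 = [1.0, 1.0]
agents = [local_update(w0, [0.5, -0.5]),
          local_update(w0, [1.5, 0.5])]
global_w = federated_average(agents)
```

Only the small parameter vectors cross the network, which keeps the signaling overhead low while still letting the central coordinator aggregate the distributed agents' learning, as in [100][101][102][103].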
An emerging communications paradigm that is well aligned with the private clouds in Section 2 is the paradigm of 5G campus networks [109][110][111][112]. Conventionally, cellular wireless networks are connected to the public Internet via gateway nodes. In contrast, campus networks operate in complete isolation from the public Internet and are therefore well-suited for scenarios that require all communications and data to remain strictly on-site. 5G campus networks can operate in a so-called standalone (SA) mode that obviates the need to operate a legacy 4G long-term evolution (LTE) network (for the control plane) in conjunction with a 5G network; rather, in the SA mode, control and data proceed from a 5G wireless end-device via a 5G new radio (NR) base station to a 5G packet core (5GC). Future research needs to examine the efficient inter-operation between a private cloud (based for instance on OpenStack) and 5G campus networks. Depending on the campus layout, computing nodes may be distributed at the locations of the 5G NR base stations or throughout the network infrastructure that connects the 5G NR base stations with the 5GC. Importantly, the 5GC processing is based on cloud-native microservices that can be flexibly processed in cloud computing units [113,114].
A related research challenge is to efficiently allocate cloud computing resources along the continuum from central data centers to the computing resources in the end-devices [115,116] to efficiently support specific highly demanding applications. For instance, wireless sensor networks collect vast amounts of sensing data, while the relevant data that are extracted from the sensed data stream are typically very small in size. Through judicious placement of computing nodes along the network paths that collect the data, the transmitted data could potentially be significantly reduced [117][118][119]. Similarly, the management of green energy supplies [120] and the integration of cloud computing with the management of electric vehicles and their charging stations pose novel challenges for ubiquitous cloud computing [121][122][123].
The Ubi-Cloud should accommodate highly diverse network access methods, spanning from terrestrial wireless to wired optical to non-terrestrial (e.g. satellite) connectivity. Each network access method has different characteristics that directly impact the cloud computing. For instance, non-terrestrial networks involving satellite connectivity have long delays due to the signal propagation between ground stations and low-orbit satellites. Advanced communication technologies for the Ubi-Cloud include millimeter waves in 5G, Terahertz waves in 6G, and free-space optical communication, which enable very high-bandwidth and low-latency links for large-scale data transactions. However, these links require precise tuning and calibration of transmission and reception radio units to maintain a strict line-of-sight (LoS), as well as stable and static transceivers. These large-bandwidth links are not only used for wireless access but also for backhaul connectivity, which is referred to as integrated access backhaul (IAB) technology. Cloud computing in central data centers that are reached over unstable backhaul links can cause reliability issues, and network protocols should be adapted for the variable link characteristics. One future research direction is to devise a hybrid data center cloud-edge cloud computing method: during stable backhaul link operation, the computing is conducted in a central data center; however, during periods of unstable backhaul link operation, the computing is temporarily conducted in edge cloud nodes. Thus, temporary backhaul link outages can be tolerated by designing the applications and the cloud computing orchestration such that end-applications can still interact with the edge cloud applications (which are reached over the still-functioning wireless fronthaul) when the backhaul connectivity to the data center cloud is compromised.
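One way to sketch this envisioned hybrid fallback is a placement controller with hysteresis on periodic backhaul probes, so that brief fluctuations do not cause constant site flapping. The probe-count thresholds are arbitrary illustrative assumptions:

```python
# Sketch of hybrid data-center/edge placement with hysteresis: fail over to
# the edge cloud after `down` consecutive bad backhaul probes, and return to
# the data center after `up` consecutive good ones. Thresholds are assumed.

class HybridPlacement:
    def __init__(self, down=3, up=5):
        self.down, self.up = down, up
        self.bad = 0    # consecutive failed backhaul probes
        self.good = 0   # consecutive successful backhaul probes
        self.site = "data-center"

    def observe(self, backhaul_ok):
        """Feed one backhaul probe result; returns the current compute site."""
        if backhaul_ok:
            self.good += 1
            self.bad = 0
            if self.site == "edge" and self.good >= self.up:
                self.site = "data-center"  # backhaul stable again
        else:
            self.bad += 1
            self.good = 0
            if self.site == "data-center" and self.bad >= self.down:
                self.site = "edge"  # backhaul unstable: compute at the edge
        return self.site
```

The asymmetric thresholds implement the hysteresis: failing over quickly preserves responsiveness during an outage, while returning only after a longer streak of good probes avoids oscillating migrations of application state.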
In-network computing enables edge cloud computing on network nodes, such as switches, routers and gateways. Traditional in-network computing includes caching, storage and network filtering. Advanced in-network computing applications may involve machine learning techniques to detect traffic characteristics, e.g. for intrusion detection, to identify abnormal traffic patterns, for deep packet inspection of the data packet payload, as well as for dynamic encryption and decryption. In-network computing reduces the overall computing required at the edge cloud nodes and in the data center cloud nodes; e.g. if encrypted flows are decrypted at the gateway feeding into the last communication hop before the destination node, then decryption can be avoided at the destination node. Thus, large-scale cloud and edge applications could be separated into small functional units. Some small functional units can be executed on in-network nodes, thereby reducing the load on the edge and data center cloud nodes. Importantly, in-network computing could execute some small functional units with dedicated application-specific integrated circuit (ASIC) hardware accelerators (essentially as a form of the Flex-Cloud hardware processing principle, see Section 3) at line-rates, and thus reduce the overall end-to-end data processing latency (compared to software based execution at an edge or data center cloud node without an ASIC accelerator). Future research needs to thoroughly examine the trade-offs and operational mechanisms, e.g. the orchestration mechanisms, for in-network computing versus the computing at edge and data center cloud nodes.
Decentralized energy trading based on blockchain and distributed ledger technology (DLT) is an emerging area that has been studied in the context of distributed cloud computing. However, blockchain and DLT have not yet been widely studied in the context of the energy sector. Generally, blockchains are not suitable for handling massive computations, nor for running consensus algorithms at scale; also, blockchains consume massive amounts of energy. Two-tier cloud computing [68,98] could potentially provide an avenue for efficiently trading decentralized energy and needs to be examined in detail in future research.
Aside from these technological challenges, edge cloud computing poses substantial economic and public policy challenges. Depending on national policies and regulations, the development, operation and management of edge cloud computing infrastructures may be the responsibility of cloud computing providers, telecommunications infrastructure operators, or separate commercial or governmental organizations. Reliable edge cloud computing infrastructures require proper investments and revenue sharing to be economically viable, and these economic and public policy aspects need to be thoroughly examined in future research.
5. Ubi-Cloud: computing for mobile users

5.1 Background and review of existing approaches

This section focuses on the Ubi-Cloud aspect of seamlessly supporting mobile end-users with cloud computing services. Low-latency application computing for mobile users typically requires that the compute processes are migrated among the distributed computing nodes in the edge cloud computing infrastructure to follow and stay in close physical vicinity of the mobile users [124]. A wide variety of modern computing applications are specifically geared toward mobile users [125], e.g. mobile crowd sensing and object detection [126,127], as well as control and management information systems for a wide variety of industrial, cyber-physical and vehicle traffic systems [128][129][130][131].
The migration of complete containers hosting the computing applications is highly demanding and typically only realistic with some hardware acceleration [132]. Therefore, recent research has focused on developing techniques that only transfer the necessary state information of the computing applications [124,133].

Future research and development directions
The localization of end-devices through wireless technologies, such as 5G, has been gaining interest for delivering location-based services. Data center and edge cloud applications can effectively deliver computing services to end-devices based on location-aware computing. Mobile end-devices often change their location, forcing the location-based services to adapt to the location changes. State-of-the-art localization techniques not only provide the current location of an end-device, but also help to predict the user movements and to perform the necessary service modifications before the location actually changes. Future research needs to rigorously examine and refine these advanced techniques of location estimation and tracking of end-devices to ensure reliable low-latency service delivery through the efficient transfer of the necessary state information to the appropriate edge cloud computing nodes and by effectively adapting the location-based services based on the predicted future location.
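The proactive adaptation described above can be sketched with a deliberately simple linear position extrapolation (real systems would use far richer mobility models); the edge node coordinates below are invented for illustration:

```python
# Sketch of proactive, location-aware service placement: predict the next
# user position and pre-select the edge node that should receive the
# application state. Extrapolation model and coordinates are assumptions.

def predict_next_position(fixes):
    """Linearly extrapolate from the last two position fixes."""
    (x1, y1), (x2, y2) = fixes[-2], fixes[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

def select_target_node(position, edge_nodes):
    """Pick the edge node closest to the predicted position, so that
    state transfer can start before the user actually arrives there."""
    return min(edge_nodes,
               key=lambda name: (edge_nodes[name][0] - position[0]) ** 2
                              + (edge_nodes[name][1] - position[1]) ** 2)

fixes = [(0.0, 0.0), (1.0, 0.0)]  # user moving east at one unit per fix
nodes = {"edge-A": (0.0, 0.0), "edge-B": (3.0, 0.0)}
target = select_target_node(predict_next_position(fixes), nodes)
```

Transferring the state toward `target` while the user is still served by the current node is what hides the migration latency from the application; mispredictions, of course, waste transfers, which is one of the trade-offs future research needs to quantify.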

Conclusions and outlook
We have introduced the Ubi-Flex-Cloud concept consisting of the two dimensions of cloud computing research and development focused on (i) the flexible scaling of the cloud computing capabilities and features, and (ii) the ubiquity of the cloud computing services. We have outlined topics for future research and development to address critical challenges along the Flex-Cloud and Ubi-Cloud dimensions.
An important overarching future research challenge is to make the various individual research advances along the Flex-Cloud and Ubi-Cloud dimensions compatible with each other. More specifically, an important overarching research imperative is to develop, evaluate and refine integration strategies that unify the various Flex-Cloud and Ubi-Cloud advances into cohesively integrated functioning Ubi-Flex-Cloud systems that achieve the dual goals of flexibility and ubiquity. An important additional overarching future research direction is to explore and evaluate optimization mechanisms for the Ubi-Flex-Cloud. Recent research has employed several strategies, including simulations [134], decision trees [135,136], as well as nature-inspired strategies [137]. For the complex configuration optimizations in Ubi-Flex-Cloud settings, nature-inspired heuristics may be promising due to their simplicity and wide adaptability. Several recent nature-inspired approaches, e.g. [138][139][140][141][142][143], could be explored for configuring Ubi-Flex-Clouds in future research, possibly in hybrid approaches with other strategies, such as decision trees.