People-as-a-Service Dilemma: Humanizing Computing Solutions in High-Efficiency Applications

Next-generation computing solutions, such as cyber-physical systems or Industry 4.0, are focused on increasing efficiency in process execution as much as possible. Removing unproductive delays or keeping infrastructures operating at their total capacity are typical objectives in these future systems. Decoupling infrastructure providers and service providers using Anything-as-a-Service (XaaS) paradigms is one of the most common approaches to address this challenge. However, many real scenarios include not only machines or controllers but also people and workers. In this case, deploying process execution algorithms and XaaS solutions degenerates into a People-as-a-Service scenario, which poses a critical dilemma: Can highly efficient production scenarios guarantee people's wellbeing? In this paper, we address this problem and propose a new process execution algorithm based on a novel understanding of efficiency. In this case, a humanized efficiency definition combining traditional efficiency ratios and wellbeing indicators is used to allocate tasks and assign them to different existing workers. In order to evaluate the proposed solution, a simulation scenario including social and physical elements was built. Using this scenario, a first experimental validation was carried out.


Introduction
Many innovative computing solutions have been reported in the last 15 years: cyber-physical systems (CPS) [1], edge computing [2], Industry 4.0 [3], and so forth. All of them, nevertheless, share some common characteristics. First, all of them are distributed solutions, where many different physical agents support the execution of high-level services [4]. These agents may be very heterogeneous, including resource-constrained controllers, legacy systems, traditional hosts, and even people. Second, they are all service-oriented mechanisms [5]. Usually, these solutions define high-level services through the coordination of low-level agents with very heterogeneous behavior, so end users are not aware of how services are finally provided. Third, they are all focused on providing services with the highest possible efficiency [6]. Process and task allocation and execution algorithms are deployed at a high level to ensure services have the lowest cost and highest quality. Typical unproductive factors, such as delays, oversized infrastructures, or defective executions, are avoided and removed, maintaining the workload of physical agents as high as possible in a continuous manner. This purpose is very interesting from an economic and engineering point of view, and it is the basis of new concepts such as digitalization [7] and circular economy [8]. In fact, in order to achieve greater levels of specialization and economic efficiency, traditional businesses have been divided into smaller units, which are much more profitable. Typically, service providers and infrastructure providers have broken down their traditional integrated supply chains and have created different and independent businesses. To increase the profitability of this new approach, Anything-as-a-Service (XaaS) paradigms [9] are usually employed. In a XaaS model, infrastructure providers do not sell or rent physical agents but offer their execution capacity as a service, commonly through the Internet.
With this technique, physical infrastructures may operate continuously at their full capacity, with no delays, as different slices (rented by different service providers) may be assembled to reach this objective. Thus, fixed costs (boot procedures, configuration delays, etc.) decrease and economic (energetic, operative, etc.) efficiency increases.
However, as previously stated, physical agents are very heterogeneous and, in particular, people may be involved [10]. In this case, people are put under a XaaS work model, which degenerates (in the end) into a People-as-a-Service approach. People, contrary to engineered solutions, tend to preserve their wellbeing instead of system efficiency. Rest periods, human errors, holidays, regulatory limitations, and so forth, are (from the sociological point of view) the most important aspects to be considered when people are working. Nevertheless, efficient process execution algorithms are not aware of how services are supported or provided, and they may penalize tasks performed by humans due to their low efficiency. As a reaction, in a People-as-a-Service scenario, working conditions tend to deteriorate (as does people's wellbeing), and workers are treated and managed in a very dehumanizing and alienating manner.
In that way, a dilemma arises: Can highly efficient production scenarios guarantee people's wellbeing? In a trivial approach, people would simply be removed from processes; however, some procedures must be performed by humans, or (as with handmade products) humans are the critical added value. On the other hand, in the most popular current trend, people are forced to behave as machines, as if machines were the ideal of perfection. Nevertheless, this unnatural manner of managing people has critical long-term consequences (depression, unproductivity, etc.) that are now clearly emerging. The authors argue in this work that the solution is to adapt process execution algorithms to humanized scenarios through new and innovative mechanisms seeking a balance between efficiency and wellbeing. Therefore, this paper proposes a new humanized process execution algorithm. The algorithm is high level, so it is compatible with any other existing task execution solution, low-level infrastructure, or business. In this work, a new indicator is proposed (named "wellbiciency") that represents a combination of process execution efficiency and people's wellbeing. Using this new indicator as a reference, the proposed algorithm tries to optimize its value dynamically according to the system situation. As a result, the obtained system behavior should preserve both economic profitability and people's wellbeing.
The rest of the paper is organized as follows: Section 2 describes the state of the art on humanized computing, especially process execution solutions. Section 3 presents the proposed technological contribution, including the mathematical formalization of the wellbiciency indicator and the final humanized algorithm. Section 4 describes the experimental validation, which was employed to evaluate the performance of the proposed solution, and its results. Section 5 concludes the paper.

State of the Art
Although humanized computing is a very relevant open challenge, works on this topic are still rare nowadays. Most works on this topic actually focus on innovative ways of managing and performing human-computer interactions (HCIs).
The idea of humanizing computing systems first appeared around 2000. Classic computer theory considers a central processing unit connected to a set of peripheral devices, through which users can "ask for" actions that should be performed by the processor [11]. In that way, it could be said that computers and people establish a dialogue at a certain level of abstraction. This traditional manner of interaction with computers is called "explicit interaction", as (at every moment) users are aware of the expected behavior from the computer when they explicitly trigger a task or an action execution. However, in 2000, Albrecht Schmidt proposed a new paradigm called "implicit human-computer interaction" [12]. An implicit interaction is any user action that is not primarily focused on obtaining a response from computers, but to which processing devices respond as they are programmed to understand that stimulus. In this way, Schmidt proposed that processing devices should be aware of an environment's evolution and its inhabitants, collecting information about them through sensors and actuators and obtaining some understanding of events in the physical world. The final objective of this new approach is to humanize computing systems, devices, and solutions. To date, thousands of works have analyzed how to support implicit HCIs. For example, recently, wearable devices have proved to be a valid interface between humans and hosts [13]. Systems using commercial sensors [14], transparent solutions based on super senses [15], and high-tech mechanisms based on, for example, leap motion [16] have been proposed. From a theoretical point of view, other ideas such as people-oriented interfaces [17] have also been reported.
On the other hand, some works have proposed mathematical frameworks to extract hidden information from people and, thus, feed algorithms in CPS, Industry 4.0, or ambient intelligence solutions. Specifically, emotional interfaces [18], where people's emotions are analyzed, have been defined. Other proposals based on brain signals have also been reported [19], and works discussing how to apply psychology to humanize computing [20] and software [21] are also common. Initial applications of psychological theories to Industry 4.0 have also been reported that consider human motivation and Maslow's proposals [22]. A very large group within this area is human task recognition, where many works based on artificial intelligence [23,24] or pattern recognition [25,26] techniques may be found.
Finally, a small group of heterogeneous works on humanizing computing have been reported. For example, there are articles about how to humanize process models and definitions [10]. Moreover, self-configuration technologies for humanized systems [27] may also be found.
The proposed solution in this paper belongs to this last group, as it may be integrated with previous humanized technologies to build a real humanized computing scenario.

Wellbiciency: Humanizing Next-Generation Computing Solutions
In this section, we present a new humanized computing solution based on the innovative idea of wellbiciency, a generalized mean combining efficiency and wellbeing indicators. In the next subsections, we present a mathematical formalization and a practical algorithm for process execution considering this new parameter.

Mathematical Formalization
An application scenario A, where a process execution system is deployed and running, may be understood as the union of a group of people (workers) P and a group of independent technological domains D (1):

A = P ∪ D = {p_1, ..., p_M} ∪ {d_1, ..., d_N}. (1)

Each independent technological domain d_i is represented in the process execution system by a set of technical variables V_i (2):

V_i = {v_1^i, ..., v_K^i}. (2)

With this set V_i, a partition may be defined (3). Two subsets are included in this partition. The first subset, R_i, includes all variables describing the (amount of) valid results obtained from the execution system. The second subset, C_i, includes all variables describing the resources invested to generate the obtained results:

V_i = R_i ∪ C_i, R_i ∩ C_i = ∅. (3)

If both subsets in the partition are nonempty, then it is possible to define an efficiency function η_i (4) describing the behavior of the technological domain d_i:

η_i = f(V_i, t). (4)
As can be seen, this efficiency function depends on the set of technical variables V_i but also on time. In fact, as time passes, technological solutions get older and, for identical configurations and variable values, the global efficiency is lower. These two effects are independent and, thus, may be expressed as the product of two different functions (5). The function η_i^form(·) is named the "formal efficiency", and it represents the efficiency as defined by technological providers (or users) from the state variables V_i. On the other hand, the function a_i(·) is named the "aging function" and behaves as an envelope, modulating the real obtained efficiency according to time:

η_i(V_i, t) = η_i^form(V_i) · a_i(t). (5)

This aging function (see Figure 1) presents two different areas. The first part of the function represents the product life of the technological domain, where aging may be considered negligible (the envelope is close to unity). The second part is the aging zone, where system efficiency goes down even when state parameters are maintained. This zone typically follows a rational function (6). In this function, t_age represents the moment in the system lifetime when effective aging starts, and λ_i indicates the speed at which the system gets older: as λ_i grows, the aging speed also increases:

a_i(t) = 1 if t ≤ t_age; a_i(t) = 1 / (1 + λ_i · (t − t_age)) if t > t_age. (6)

On the other hand, each person p_j is represented in the process execution system by a set of wellbeing indicators W_j (7):

W_j = {w_1^j, ..., w_L^j}. (7)

Using these indicators and a weight function g(·), it is possible to obtain a realistic human wellbeing measure ω_j (8). The employed weight function may be selected by system managers or psychologists according to their needs and studies, as well as the specific application scenario:

ω_j = g(W_j). (8)

However, human wellbeing is not stable and, in general, people's needs grow over time. Thus, in order to maintain a constant level of wellbeing, these needs must be satisfied in a continuous and increasing manner. Many works on human motivation, wellbeing, and behavior, such as Maslow's proposals [22], describe human needs as a staircase or pyramid (see Figure 2). After a certain time at the same "level", people's wellbeing starts to diminish. Then, people must be promoted to the next level to keep them with the same perception of wellbeing as before.
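The aging envelope described above can be sketched in code. The following is a minimal illustration, assuming a unit envelope during the product life and the rational decay afterwards; the parameter names (t_age, lam) and default values are illustrative, not taken from the paper:

```python
def aging_envelope(t, t_age, lam):
    """Aging function a(t): close to unity during the product life,
    rational decay 1 / (1 + lam * (t - t_age)) in the aging zone."""
    if t <= t_age:
        return 1.0
    return 1.0 / (1.0 + lam * (t - t_age))

def efficiency(formal_eff, t, t_age=1000.0, lam=0.01):
    """Real efficiency as the product of the formal efficiency and
    the aging envelope, as in Equation (5)."""
    return formal_eff * aging_envelope(t, t_age, lam)
```

Note how, for a fixed formal efficiency, the returned value stays constant until t_age and then decreases monotonically, matching the two areas of Figure 1.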
The previously described realistic human wellbeing measure does not consider people's perceptions or the impact of time. Therefore, we must define a new wellbeing measure, the perceived wellbeing ω_j^per, where these effects are included (9). This new measure is calculated from the realistic human wellbeing measure using a mapping function h(·), which also considers time:

ω_j^per = h(ω_j, t). (9)

This function may present different mathematical expressions but, usually, it is calculated using numerical algorithms and branched functions (see Algorithm 1). In these numerical functions, it is considered that the human saturation time, t_sat, determines when people perceive a decrease in wellbeing, even though their realistic wellbeing has remained constant.
Algorithm 1. Numerical calculation of the perceived wellbeing mapping h(·), branched on the saturation time t_sat.
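A minimal sketch of such a branched mapping h(·) is shown below. The specific decay branch (an exponential decay after the saturation time) and the decay parameter are assumptions for illustration; the paper only states that h(·) is a branched numerical function governed by t_sat:

```python
import math

def perceived_wellbeing(omega, t_level, t_sat, decay=0.1):
    """Branched mapping h(.): before the saturation time t_sat,
    perception matches the realistic wellbeing omega; afterwards,
    perceived wellbeing decays even though omega stays constant.
    t_level is the time spent at the current need level."""
    if t_level <= t_sat:
        return omega
    # Assumed decay law (illustrative): exponential loss of perception.
    return omega * math.exp(-decay * (t_level - t_sat))
```

Under this sketch, promoting a person to the next need level resets t_level and restores the perceived wellbeing, matching the staircase behavior of Figure 2.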
Then, the wellbiciency Ω of the application is defined as the generalized mean (also named the Kolmogorov mean) of all efficiency functions and perceived wellbeing measures defined in the application, for all technological domains and people (10). This mean considers a function φ(·) called (in this work) the "aggregation function". This function represents the weight and impact of each indicator in the resulting wellbiciency, as well as the relation between efficiency and wellbeing measures:

Ω(t) = φ⁻¹( (1 / (N + M)) · [ Σ_{i=1}^{N} φ(η_i) + Σ_{j=1}^{M} φ(ω_j^per) ] ). (10)

In order to guarantee the existence of the inverse function φ⁻¹(·), the aggregation function must be continuous and injective. Many different functions may be considered. Table 1 shows and describes some examples, indicating the characteristics inherited by the resulting wellbiciency if each one is selected. Once wellbiciency is built, it is important to note that it is a time function Ω(t). Then, noise, fluctuations, interferences, numerical errors, and so forth, may affect the instantaneous value of this parameter. To remove all these effects, final values of wellbiciency are obtained after a smoothing process using a Chebyshev type II filter (11). Chebyshev type II filters are flat in the passband (so no distortion is introduced in wellbiciency), attenuate variations faster than f_p Hz, and remove all components which vary faster than f_s Hz. The parameters ε and n control the attenuation of the removed components, with T_n the Chebyshev polynomial of order n and ω_s = 2·π·f_s:

|H(ω)|² = 1 / ( 1 + 1 / ( ε² · T_n²(ω_s / ω) ) ). (11)
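Equation (10) can be sketched directly as a Kolmogorov mean. In this minimal illustration, the default aggregation function φ = log (yielding a geometric mean) is an arbitrary choice for demonstration; any continuous and injective φ with a known inverse would do:

```python
import math

def wellbiciency(efficiencies, wellbeings, phi=math.log, phi_inv=math.exp):
    """Kolmogorov (generalized) mean of all efficiency and perceived
    wellbeing indicators, as in Equation (10). phi must be continuous
    and injective so that phi_inv exists."""
    values = list(efficiencies) + list(wellbeings)
    return phi_inv(sum(phi(v) for v in values) / len(values))
```

With the identity function as φ, the same call reduces to the arithmetic mean of all indicators; the filtered value of Equation (11) would then be obtained by smoothing the resulting time series (e.g., with a low-pass filter designed offline).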

Proposed Algorithm
In a typical process execution system, where N + 1 locations may execute each one of the T tasks in the workflow, the number of possible variations V_T to execute the process increases exponentially with the number of tasks (12). Thus, obtaining the optimum execution scheme is a poorly scalable problem if no additional instrument is employed:

V_T = (N + 1)^T. (12)
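The exponential growth of Equation (12) is easy to verify numerically; the function below is a trivial illustration of the count of execution schemes:

```python
def num_schemes(n_locations, n_tasks):
    """Number of possible execution schemes: each of the n_tasks tasks
    may run at any of the n_locations + 1 candidate locations, so the
    count grows exponentially with the number of tasks (Equation (12))."""
    return (n_locations + 1) ** n_tasks
```

For example, with only 4 alternative locations, a 10-task workflow already admits 5^10 (almost ten million) schemes, which motivates the predictive mechanism described next.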
On the other hand, in order to predict the future wellbiciency of a system, depending on the selected execution scheme, some predictive technologies should be considered.
At this point, we must consider that wellbiciency, as a generalized mean, meets the central limit theorem, so for a large number of indicators it follows a Gaussian distribution (13):

Ω ~ N(μ_Ω, σ_Ω²). (13)

Further, it is well known that in Gaussian distributions the most probable value and the mean value coincide. Thus, for a sufficiently large system, the most probable value for wellbiciency may be calculated as the expected value of the joint distribution of all wellbeing and efficiency indicators (14):

Ω_prob = E[Ω] = E[ φ⁻¹( (1 / (N + M)) · [ Σ_{i=1}^{N} φ(η_i) + Σ_{j=1}^{M} φ(ω_j^per) ] ) ]. (14)

However, technological domains and people are totally independent of each other. Thus, the joint probability may be decomposed as the product of different unidimensional probabilities and, finally, the global expected value as the addition of several different unidimensional expected values (15):

E[ Σ_{i=1}^{N} φ(η_i) + Σ_{j=1}^{M} φ(ω_j^per) ] = Σ_{i=1}^{N} E[φ(η_i)] + Σ_{j=1}^{M} E[φ(ω_j^per)]. (15)

At this point, for each technological domain and person, the following information is acquired: a discrete grid is created. In one dimension, the current values of the corresponding indicator are represented. In the other dimension, the number of tasks to be assigned to the agent under study is represented. Each vertex in the grid contains the expected value of the studied indicator under those conditions (see Figure 3). This information may be easily measured before system operation, so system performance is not affected. Using these grids, connected as three-dimensional cubes, it is easy to find the optimal process execution scheme using a dynamic time warping algorithm [26]. The cost to be optimized, of course, is the future wellbiciency, represented by its most probable value and calculated as the aggregated value of all nodes crossed by the algorithm. Algorithm 2 describes the resulting mechanism.
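The grid-based search can be sketched as a dynamic program over the pre-measured grids. This is a simplified illustration, not the paper's Algorithm 2: it assumes a plain sum as the aggregation of expected values, and each agent's grid is reduced here to a single row (expected indicator value per number of assigned tasks):

```python
def allocate_tasks(expected, n_tasks):
    """Distribute n_tasks among agents, maximizing the aggregated
    expected indicator value. expected[a][k] is the pre-measured
    expected value for agent a when assigned k tasks (the grid of
    Figure 3, reduced to one row per agent for illustration).
    Returns (best_value, tasks_per_agent)."""
    best = {0: (0.0, [])}  # tasks assigned so far -> (value, assignment)
    for grid in expected:
        nxt = {}
        for used, (val, assign) in best.items():
            for k, ev in enumerate(grid):
                if used + k > n_tasks:
                    break
                cand = (val + ev, assign + [k])
                if used + k not in nxt or cand[0] > nxt[used + k][0]:
                    nxt[used + k] = cand
        best = nxt
    return best[n_tasks]
```

Because the grids are measured before system operation, this search runs entirely offline and its cost is polynomial in the number of agents and tasks, avoiding the exponential enumeration of Equation (12).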

Experimental Validation
In order to evaluate the performance of the proposed solution, an experimental validation based on simulation scenarios was performed.
The simulation scenario was built using a cosimulator, combining both social and network (physical) simulations [28]. Specifically, this cosimulator was based on two well-known simulators: MASON and NS3. MASON (Multi-Agent Simulator of Neighborhoods) is a fast discrete-event multiagent simulation library core written in Java. NS3 (Network Simulator 3) is also a discrete-event network simulator for Internet systems. Both simulators were connected through a specific engine.
Simulations were carried out using a Linux system; both MASON and NS3 may be easily deployed on Linux systems. To perform the proposed experiment, we used a 64-bit Linux Ubuntu 16 operating system, with an Intel i5 processor and 8 GB of RAM.
The simulation scenario consisted of five different technological domains, representing various production systems. One domain was composed of 50 resource-constrained devices (microcontrollers), the second domain was composed of mobile robots, the third domain was built using legacy systems, the fourth domain presented a traceability solution based on RFID (Radio Frequency Identification) tags and readers, and finally, the fifth domain was a domotic solution composed of Raspberry Pi nodes. Moreover, in this scenario, 10 people were simulated. All of them were presumed, in the first experiment, to have stable behavior. Agents representing people in our simulation were provided with a Java algorithm representing the evolution of motivation in humans [22].
Two different simulations were performed during the experiment using this scenario. In the first one, a standard process execution solution [10] was deployed in a Linux Container (supported by LXC technologies), connected to the simulation scenario through a TAP (Test Access Point) node and a ghost node in the NS3 simulator. In the second simulation, the proposed humanizing mechanism was added to the process execution system. Data were collected to analyze the wellbeing of each person and the efficiency of each technological domain in the simulations. Then, these data were processed using MATLAB software to evaluate all indicators in both simulations, as well as the global wellbiciency.
Each simulation represented 24 h of continuous operation, where processes were continuously being received and executed (so we could analyze the results when the application scenario had stable behavior).
In order to remove random effects, each simulation was repeated 12 times, and the final results were obtained as the mean value of all these simulations. Figure 4 shows the obtained results for both simulations and all indicators, people, and technological domains. As can be seen, using the proposed humanized mechanism, the wellbiciency value increased by up to 50%, mainly because people's motivation and wellbeing increased in approximately the same manner. On the other hand, efficiency was reduced by 25%, but it still retained acceptable values (around 70%). In any case, if the observed decrease in efficiency is not acceptable in certain scenarios, this situation may be easily corrected using a different aggregation function.

Conclusions and Future Work
In this paper, we proposed a new mechanism to humanize next-generation computing solutions for process execution. This proposal addresses the People-as-a-Service dilemma, which arises in scenarios where very high efficiency is achieved at the cost of people's requirements for wellbeing.
To address this challenge, we proposed a new parameter called wellbiciency, which combines, through a generalized mean, efficiency and wellbeing indicators. This humanized efficiency definition is used to allocate tasks and assign them to different existing workers and nodes in a more respectful manner, considering both economic and wellbeing objectives.
In order to evaluate the proposed solution, a simulation scenario including social and physical elements was built. The results showed that the humanization level grew and people's wellbeing increased by up to 50%.
Future works will consider more exhaustive experimental validations and real deployments to validate the proposed mechanism.
Author Contributions: The authors' contributions to this work are as follows: B.B. proposed and developed the paper's idea, R.A. and T.R. contributed to the theoretical formalization and paper redaction, and M.H. implemented algorithms and experimental validation.

Funding:
The research leading to these results received funding from the Ministry of Economy and Competitiveness through the SEMOLA project (TEC2015-68284-R).

Conflicts of Interest:
The authors declare no conflict of interest.