Ontological model for acoustic management in a smart environment

Purpose – The Reflective Middleware for Acoustic Management (ReM-AM), based on the Middleware for Cloud Learning Environments (AmICL), aims to improve the interaction between users and agents in a Smart Environment (SE) using acoustic services, in order to consider the unpredictable situations due to sounds and vibrations. The middleware allows observing, analyzing, modifying and interacting in every state of an SE from the acoustics.
Design/methodology/approach – This work details an extension of ReM-AM using the ontology-driven architecture (ODA) paradigm for acoustic management. This paper defines the different domains of knowledge required for the management of sounds in SEs, which are modeled using ontologies.
Findings – This work proposes an acoustics and sound ontology, a service-oriented architecture (SOA) ontology, and a data analytics and autonomic computing ontology, which work together. Finally, the paper presents three case studies in the context of smart workplace (SWP), ambient-assisted living (AAL) and Smart Cities (SC).
Research limitations/implications – Future works will be based on the development of algorithms for classification and analysis of sound events, to help with emotion recognition not only from speech but also from random and separate sound events. Also, other works will be about the definition of the implementation requirements, and the definition of the real context modeling requirements to develop a real prototype.
Practical implications – The case studies show the flexibility that the ReM-AM middleware based on the ODA paradigm has by being aware of different contexts and acquiring information of each one, using this information to adapt itself to the environment and improve it using the autonomic cycles. To achieve this, the middleware integrates the classes and relations in its ontologies naturally in the autonomic cycles.
Originality/value – The main contribution of this work is the description of the ontologies required for future works about acoustic management in SEs, considering that what has been studied by other works is the utilization of ontologies for sound event recognition, but they have not been expanded as a knowledge source in an SE middleware. Specifically, this paper presents the theoretical framework of this work, composed of the AmICL middleware, the ReM-AM middleware and the ODA paradigm.


Introduction
In artificial intelligence, the main process related to sound management is linked to voice recognition and voice commands. It is important to also consider the wide range of acoustic events that occur in a smart environment (SE). Thus, sound management in an SE is complex and requires new approaches.
As explained in Ref. [1], the Reflective Middleware for Acoustic Management (ReM-AM) extends AmICL [2,3] to change the behavior of any SE from the acoustic perspective, implementing different components for sound management. In ReM-AM [4], there are three components: collecting audio data (CAD), to create metadata and categorize sound information; interaction of system-user-agent (ISUA), to identify and analyze the information and determine the sources and interactions; and decision-making (DM), to adapt the SE in terms of acoustic requirements. These components help the system to deploy a set of autonomic cycles of data analysis tasks [5,6], which work together to exploit the acoustics in the self-organization of the SE. The previous work [7] introduced a state of the art of acoustics and its relationship with SEs, while [4] described the ReM-AM middleware. The autonomic cycles supported by ReM-AM are explained in Ref. [8], and a first attempt to implement the ontology-driven architecture (ODA) for acoustic management is analyzed in Ref. [1].
This work explains how the ODA paradigm [9] is integrated with ReM-AM through different models, such as the computation-independent model (CIM) focused on acoustic management (acoustic and sound ontologies), and the platform-independent model (PIM) focused on how ReM-AM is deployed on the computational platform (service-oriented architecture [SOA], data analytics and autonomic computing ontologies).
In this regard, some works have been developed about ontologies related to sound and acoustics. In the work [10], audio event recognition is considered. Particularly, the authors describe a data set of audio events obtained using a hierarchical ontology to stimulate the development of audio event recognizers. Also, in the work [11], a two-ontology-based neural network architecture for sound event classification is proposed, showing an improvement by using ontological information in classification tasks. In Ref. [12], the authors present an ontology-aware neural network for single-label and multilabel audio event classification. In Ref. [13], the author explains what sound ontology is in the context of analytic philosophy and the approaches that it encompasses. The paper's intention is to relate both the depiction of sound and the auditory phenomena in the phenomenological tradition. In this case, analytic philosophy inquires into the nature of sounds, their location, auditory experience or the audible qualities based on an ontology approach.
Also, various works show the utilization of ontologies in SEs. For example, Ref. [14] describes an ontology called SmartEnv to represent SEs, which describes different aspects, including physical and conceptual characteristics of an SE. They use the ontology design pattern paradigm to define the ontology modularly. Ning et al. [15] propose an ontology for gathering sensor data with semantic meaning in smart homes. Particularly, this ontology allows the modeling of the context and the activities, using spatiotemporal and user profile information. Also, the work [16] discusses the use of ontologies to form an integrated platform for smart healthcare services, while the authors of [17] introduce ontologies to model dynamic context. In this work, however, the main focus is to combine the ontologies for the CIM and PIM layers but not to generate a new one, as proposed in Ref. [18] by Stoilos et al., where independently developed ontologies are integrated, with experimental evaluations to analyze the coherence among the integrated ontologies. The existing ontologies help the acoustic management in an SE. Particularly, the mapping approach was used in Ref. [19] for the integration process of the ontologies used following the ODA paradigm, which establishes identity relationships between entities of the O1 ontology and the O2 ontology through their common properties (e.g. subclass_of). The result of the mapping is reflected in a binding ontology that contains the equivalent entities, through which the two ontologies are connected.
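The mapping step described above can be sketched as a simple matching procedure; the entity and property names in the sketch are illustrative, not taken from the actual ontologies of Ref. [19].

```python
# Hypothetical sketch of the ontology-mapping step: entities from two
# ontologies are matched through shared properties (e.g. subclass_of),
# and the matches are collected into a "binding ontology".
# All entity and property names below are illustrative.

def build_binding(o1, o2, shared_props=("subclass_of",)):
    """Return pairs of entities judged equivalent because they agree
    on all of the given shared properties."""
    binding = []
    for e1, p1 in o1.items():
        for e2, p2 in o2.items():
            if all(p1.get(p) is not None and p1.get(p) == p2.get(p)
                   for p in shared_props):
                binding.append((e1, e2))
    return binding

# Toy ontologies: entity -> {property: value}
sound_ontology = {"SpeechSound": {"subclass_of": "SoundEvent"}}
acoustic_vocab = {"VoiceSignal": {"subclass_of": "SoundEvent"},
                  "Sensor": {"subclass_of": "Device"}}

print(build_binding(sound_ontology, acoustic_vocab))
# -> [('SpeechSound', 'VoiceSignal')]
```

The binding list plays the role of the binding ontology: each pair connects one entity of O1 with its equivalent in O2.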
On the other hand, to guide the ontology development/engineering process, methodologies that support the creation or integration of ontologies have been proposed. These methodologies allow reusing existing knowledge resources (such as ontologies, thesauri, lexicons, etc.). Examples of these methodologies are Methontology and NeOn. Particularly, NeOn has been created for building ontology networks by reusing resources, using alignments and considering the evolution of the ontologies [20]. This paper uses the ODA paradigm, which has an implicit methodology for the design/integration of ontologies.
The main contribution of this work is the definition of an ontological framework for acoustic management in SEs. In particular, this work defines: (1) the different ontologies required for the implementation of an acoustic management system; (2) the process of integration of the ontologies based on the methodological framework implicit in the ODA paradigm; and (3) the integration of the ontological framework with autonomic cycles of data analysis tasks, which allows the acoustic management of SEs.
Previous works studied the implementation of ontologies for sound event recognition, among other specific tasks, but none of them developed an acoustic ontological framework for acoustic management in an SE using a middleware. Specifically, this paper introduces the theoretical framework of this work composed of the AmICL middleware, ReM-AM middleware and the ODA paradigm. Then, the definition of the AmICL-ReM-AM middleware based on the ODA paradigm is presented. The utilization of this ODA-based approach is shown in several case studies (SWP, AAL and SC) to analyze its behavior in these SEs. Finally, there is a comparison with other works, followed by the conclusions that summarize the contributions and future works.

Theoretical framework
To understand the relation between AmICL, ReM-AM and ODA, it is important to describe each one in this section.

AmICL and ReM-AM
AmICL proposes a middleware that uses digital resources to improve the learning process, combining educational services from the cloud with multiagent systems (MAS). The architecture of AmICL is described in detail in Refs. [2,3], and it was used as a base for the development of the ReM-AM architecture described in Ref. [4], where the layers are the same, except for the Audio Management Layer (AML) that was added for ReM-AM.
The AML has three components. CAD characterizes the sound events by exploiting an auditory vocabulary to categorize them, by creating metadata about them and by defining the properties of each sound event. ISUA offers sound pattern recognition and smart analysis to identify the sources using algorithms provided by the cloud and to determine the interactions taking place in the SE. DM revises the information obtained and the options available to acoustically adapt the SE and optimize it. Using this structure, ReM-AM has three main autonomic cycles, which have been defined using the concept of "autonomic cycles of data analysis tasks" that defines three phases in the cycle: observation, analysis and decision-making [5,6]. These autonomic cycles are:
(1) The general acoustic management (GAM) follows the ABC paradigm [21], which consists of three processes: absorbing acoustic waves that create reverberation, blocking the dispersion of the waves to focus them in the right direction and covering noises using noise-canceling processes.
(2) The intelligent sound analysis (ISA) aims at identifying the acoustic features of the context to discover the SE (an intelligent concert hall, a smart classroom, an ambient-assisted living [AAL] environment, among others) and which possible tasks can be executed. It uses acoustic sensors in the SE, and the location of the microphones will depend on the acoustic features of the space.
(3) The artificial sound perception (ASP) has the goal of offering an artificial perception of an SE. This autonomic cycle is based on the work [12], but inversely: that work presents a virtual acoustic reality, but instead of generating an artificial acoustic environment, ReM-AM will offer an artificial perception of an SE.
These autonomic cycles, concerning the ODA paradigm, will be described in the next sections.
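The three-phase skeleton shared by these cycles (observation, analysis, decision-making) can be sketched as follows; the class and function names are illustrative and do not reflect ReM-AM's actual implementation.

```python
# Minimal sketch of an "autonomic cycle of data analysis tasks" with its
# three phases: observation, analysis and decision-making [5,6].
# All names are illustrative, not part of ReM-AM's actual API.

class AutonomicCycle:
    def __init__(self, name, observe, analyze, decide):
        self.name = name
        self.observe, self.analyze, self.decide = observe, analyze, decide

    def run(self, environment):
        observations = self.observe(environment)   # observation phase
        findings = self.analyze(observations)      # analysis phase
        return self.decide(findings)               # decision-making phase

# A toy GAM-like cycle: classify observed sounds, then cover the noises.
gam = AutonomicCycle(
    "GAM",
    observe=lambda env: env["sounds"],
    analyze=lambda sounds: [s for s in sounds if s["type"] == "noise"],
    decide=lambda noises: [("cover", n["source"]) for n in noises],
)

actions = gam.run({"sounds": [{"type": "noise", "source": "hvac"},
                              {"type": "speech", "source": "user"}]})
print(actions)  # -> [('cover', 'hvac')]
```

Each concrete cycle (GAM, ISA, ASP) would plug its own phase implementations into this skeleton.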

ODA paradigm
In informatics, an ontology is the specification of a conceptualization in a machine-readable way [9], defining a common vocabulary for data transmission in a determined domain. Its components are: classes or types, relations, instances, individuals, properties and axioms. An ODA is based on a three-layer approach [9]: (1) Computation-independent model (CIM), with a focus on functional requirements, nonfunctional properties, business rules or goals, data processing and the system domain in general. This layer describes the system from a computation-independent perspective, considering the domain and contextual information.
(2) Platform-independent model (PIM), which takes the domain information of the specifications from CIM and adds details for its real computational implementation. It defines the system in terms of a computational abstraction or a technology-neutral virtual machine.
(3) Platform-specific model (PSM), which combines the PIM with features and adds details for the development of the computational platform.
In ReM-AM, the focus will be on outlining the CIM layer and the PIM layer.

ODA paradigm for ReM-AM

Ontologies for ReM-AM based on the ODA paradigm
This section describes the ontological framework of ReM-AM that will be built following the ODA paradigm, which has an implicit methodology for the design/integration of ontologies. ReM-AM considered a sound ontology and an acoustic ontology for the CIM layer as domain information to integrate the terminology and aspects required for acoustic management. The PIM layer contains the ontologies related to the computational implementation of the ReM-AM middleware, such as SOA capabilities, data analytic task description and autonomic computing.
The previous work [1] explains that the CIM layer in the ReM-AM middleware is composed of the sound ontology from Ref. [21] and the acoustic vocabulary developed in Ref. [22] (see Figure 1a). They have two description hierarchies: part-of or has, based on the inclusion relationship; and is-a, based on the integration relationship. The sound ontology proposes a terminology for sound representation and for the integration of various sound-stream segregation systems. It has sound classes, definitions of individual sound attributes and their relationships. It uses only the part-of hierarchy.
In the acoustic vocabulary, there are four main classes: sensor, which allows controlling the acoustic data compiling; measurement, which defines the properties of the sensor measurements; time, which allows representing the intervals and instants when the measurements are made; and location, which allows representing the geographical positioning.
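The CIM-layer content just described can be sketched as two small taxonomies. The four main classes of the acoustic vocabulary follow the text above; the sound-ontology classes and attributes in the sketch (SoundStream, Frequency, etc.) are illustrative placeholders, not the actual entities of Refs. [21,22].

```python
# Sketch of the CIM layer: the sound ontology's part-of hierarchy and
# the acoustic vocabulary's four main classes. The class roles follow
# the paper's description; the concrete structure is an illustrative
# simplification.

# part-of hierarchy: a sound stream is composed of sound events, each
# of which has attributes (attribute names here are hypothetical).
sound_ontology = {
    "SoundStream": {"part_of": None,
                    "parts": ["SoundEvent"]},
    "SoundEvent": {"part_of": "SoundStream",
                   "parts": ["Frequency", "Duration", "Intensity"]},
}

# The acoustic vocabulary's four main classes and their roles.
acoustic_vocabulary = {
    "Sensor": "controls the acoustic data compiling",
    "Measurement": "defines the properties of the sensor measurements",
    "Time": "represents intervals and instants of the measurements",
    "Location": "represents the geographical positioning",
}

def parts_of(ontology, cls):
    """Walk one level down the part-of hierarchy from a class."""
    return ontology.get(cls, {}).get("parts", [])

print(parts_of(sound_ontology, "SoundEvent"))
# -> ['Frequency', 'Duration', 'Intensity']
```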
The PIM layer (see Figure 1b) presents the ontologies that are linked to the middleware: SOA, data analytics and autonomic computing aspects. The SOA ontology described in Ref. [19] explains the web services, the languages and standards and the cycle of a service. The data analytic ontology explained in Refs. [23,24] describes the data itself and the analysis methods, considering that each data has a set of parameters and criteria that determines the types, tasks, techniques for its further analysis.
The works [6,25] defined an autonomic computing ontology in which driven entities represent persons or objects, and objects can be devices or applications. Finally, these entities can be managed by controllers with the capabilities to analyze, plan, act, monitor and control them.
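The controller/driven-entity structure of this ontology can be sketched as follows; the capability names come from the description above, while the concrete entity names are hypothetical.

```python
# Sketch of the autonomic computing ontology of [6,25]: controllers
# manage driven entities (persons or objects; objects may be devices or
# applications) through the capabilities named in the text.
# Concrete entity names below are illustrative.

CAPABILITIES = ("analyze", "plan", "act", "monitor", "control")

class DrivenEntity:
    def __init__(self, name, kind):  # kind: person | device | application
        self.name, self.kind = name, kind

class Controller:
    def __init__(self, name):
        self.name, self.managed = name, []

    def manage(self, entity):
        self.managed.append(entity)

    def apply(self, capability, entity):
        # A controller may only exercise a known capability on an
        # entity it manages.
        assert capability in CAPABILITIES and entity in self.managed
        return f"{self.name} {capability}s {entity.name}"

speaker = DrivenEntity("roof_speaker", "device")
ctrl = Controller("acoustic_controller")
ctrl.manage(speaker)
print(ctrl.apply("monitor", speaker))
# -> acoustic_controller monitors roof_speaker
```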
In general, all the elements (classes, properties) of the aforementioned ontologies are used by our ontological approach in its inference processes.

ODA paradigm in the autonomic cycles of ReM-AM
The autonomic cycles of ReM-AM will use the ODA paradigm as follows: (1) GAM: During the observation phase (see Step 1 in Figure 2), it will use the sensor, measurement and location classes of the acoustic ontology (CIM layer) to obtain information about the conditions that can modify the acoustic features in the SE, such as room shapes, wall materials, objects, temperature, air density, etc. Also, the classes of the sound ontology (CIM layer) will be used to classify a sound as speech, machinery, musical or natural sound, hence determining the diverse sources to indicate which sounds should be absorbed, blocked or covered in the next phase (see Step 2 of Figure 2). Finally, during the decision-making phase, the controllers and driven entities from the autonomic computing level (PIM layer) will be used to execute the specific actions, in order to modify the acoustical situation to remove undesirable sounds or noises (see Step 3 in Figure 2).
(2) ISA: During the observation phase, the measurement class of the acoustic ontology (CIM layer) and the different classes of the sound ontology (CIM layer) will be used to identify sound information (see Steps 1-2 in Figure 3), which is crucial to get a more accurate perception of the SE and the tasks that could take place. For the analysis phase, the monitoring class from the data analytics ontology (PIM layer) will be used to detect and discriminate users, setting them apart from the sources to determine the actual use of the SE and the improvements that can be done in terms of sound features (see Step 3 in Figure 3). Finally, the decision-making phase will use the SOA capabilities ontology (PIM layer) to recover the tasks that the analysis identified, such as increasing the absorptive materials by adding panels, or the reflective surfaces for better dispersion (see Step 4 in Figure 3).
(3) ASP: During the observation phase, the acoustic ontology (CIM layer) will be used to locate sources, helping to hold the focus on the specific sound event or frequency that should be treated (see Step 1 in Figure 4), and the sound ontology (CIM layer) will be used to identify features of that event or frequency (see Step 2 in Figure 4). For the analysis phase, the sound ontology (CIM layer) will also be used, but in this case to determine acoustical parameters in order to amplify them (or decrease them, if necessary), and make them hearable if they are over or under human hearing thresholds (see Step 2 in Figure 4). The decision-making phase will use the SOA capabilities ontology (PIM layer) to determine the services according to the target, which means moving loudspeakers, rotating them or changing their settings to create an immersive sound environment for the user (see Step 3 in Figure 4). Some of these services might need to use the data analytics ontology (PIM layer) for specialized processing of the sound (see Step 4 in Figure 4).
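As an illustration of how GAM could wire the sound-ontology classes into the ABC actions, consider the following sketch; the class-to-action rules are an assumption made for the example, not ReM-AM's actual decision logic.

```python
# Sketch of how GAM could map sound-ontology classes to ABC actions
# (absorb reverberation, block dispersion, cover noise). The rules
# below are illustrative assumptions, not ReM-AM's actual policy.

SOUND_CLASSES = ("speech", "machinery", "musical", "natural")

def gam_decide(sound_events):
    """Pick an ABC action for each observed sound event."""
    actions = []
    for event in sound_events:
        assert event["class"] in SOUND_CLASSES
        if event.get("reverberant"):
            actions.append(("absorb", event["source"]))
        elif event.get("off_axis"):          # spreading in the wrong direction
            actions.append(("block", event["source"]))
        elif event["class"] == "machinery":  # assumed: treat machinery as noise
            actions.append(("cover", event["source"]))
    return actions

events = [{"class": "speech", "source": "hall", "reverberant": True},
          {"class": "machinery", "source": "hvac"}]
print(gam_decide(events))
# -> [('absorb', 'hall'), ('cover', 'hvac')]
```

In the full middleware, the resulting actions would be dispatched to the controllers and driven entities of the autonomic computing ontology (PIM layer).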

Application scenarios
To test the ontologies with the autonomic cycles, this paper describes three case studies:

Smart workplace (SWP)
As explained in Ref. [26], an SWP implies the provision of a workplace infrastructure that empowers employees through self-regulation, collaboration and communication, promotes a strong environmental ethic, and sustains organizational agility by adopting and implementing new technology platforms. In this SE, the autonomic cycle GAM can help to improve the experience of workers in their offices, especially in open-plan offices. According to Ref. [27], irrelevant speech in open-plan offices has been reported as the main distraction, causing performance decrements for workers.
In this case, it is important to identify what is considered as irrelevant speech and what should be enhanced. GAM can obtain the acoustic features from the entire workplace using the sensor, measurement and location classes from the acoustic ontology in the CIM layer (see Step 1 in Figure 5), in order to carry out the sound classification with the sound ontology to differentiate voices from other sound events (see Step 2 in Figure 5).
At this point, the system needs to consider every sound event as a unit, which means a door closing, a book falling, steps, laughs, coughs and every other isolated sound that can be generated. For example, in a space with 16 workers divided into four groups of four, the system could help with the absorption of the reflections spreading from the other groups; the separations between the tables of each group of four should help with the blockage of the acoustic waves to avoid the transmission to the other workspaces; and the system can also determine which waves should be covered. It will use the controllers and driven entities classes from the autonomic computing taxonomy of the PIM layer for the decision-making about absorption, blocking and covering (see Step 3 in Figure 5). As a result, the system will absorb the frequencies generating noise by emitting a noise-cancellation signal using roof-installed speakers, it will identify the acoustic waves that are being blocked by screen barriers, and it will cover individual sound events by masking them, when possible, using the same speakers as for the noise canceling.
Competency questions are user-oriented questions to evaluate an ontology [28]. In other words, these are questions that users would want to have answered by querying the ontology. Through the application of competency questions, it is possible to verify the quality of the ontologies used for this case study:
Q1. Which sound elements should be enhanced?
R1. In this case study, our ontological approach can infer "Speech".
Q2. Which elements will be used for the decision-making?
R2. In this case study, our ontological approach can infer "Controllers".
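Answering a competency question amounts to querying the instantiated ontologies. The following sketch shows this for the two questions above; the fact triples are an illustrative instantiation for the SWP scenario, not the actual contents of the ontologies.

```python
# Sketch of answering the competency questions by querying a tiny set
# of (subject, predicate, object) facts. The facts are an illustrative
# instantiation of the ontologies for the SWP scenario.

facts = {
    ("Speech", "role_in_SWP", "enhance"),
    ("Machinery", "role_in_SWP", "cover"),
    ("Controllers", "used_for", "decision-making"),
    ("DrivenEntities", "used_for", "decision-making"),
}

def query(predicate, obj):
    """Return the subjects s such that (s, predicate, obj) holds."""
    return sorted(s for s, p, o in facts if p == predicate and o == obj)

print(query("role_in_SWP", "enhance"))       # Q1 -> ['Speech']
print(query("used_for", "decision-making"))  # Q2 -> ['Controllers', 'DrivenEntities']
```

A real implementation would run such queries over the full ontologies (e.g. with a reasoner or a SPARQL endpoint), but the pattern is the same: the answer to a competency question is the set of entities satisfying the queried relation.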

Ambient-assisted living (AAL)
In the context of AAL [29], it is important that the system can provide information about what is happening in the SE. The ISA autonomic cycle will be able to collaborate in the required tasks by providing the acoustic information from the SE. For example, in a care home with disabled people and elders, a patient in a wheelchair has breathing problems and is often coughing, but at a particular moment, the patient is having an asthma attack. The ISA will use the measurement class of the acoustic ontology to get constants in the sound field, such as coughs and the sound of the wheelchair (see Step 1 in Figure 6), and the sound ontology from the CIM layer will be used to obtain sound information to identify speech if the patient is calling for help (see Step 2 in Figure 6). Besides the individual sound events taking place, it is also important to consider that if speech is identified, emotion recognition using sounds should be taken into account: anger, disgust, fear, happiness, sadness, surprise [30].
From the PIM layer, the system will use the data analytics ontology to monitor the sound events that alter the continuum in the SE, taking into account the emotional state of the patient, which in this case is fear (see Step 3 in Figure 6). With all this information, the system will use the web services from the SOA capabilities ontology for the decision-making, which should send a warning to a person in charge or alert doctors or nurses (see Step 4 in Figure 6).
The competency questions for this case study are:
Q1. Which element should be identified at first?
R1. In this case study, our ontological approach can infer "Speech".
Q2. Which services will be used for the decision-making?
R2. In this case study, our ontological approach can infer "Web services".

Smart cities (SC)
According to Ref. [31], in smart cities, it is imperative to exploit the wealth of information and knowledge generated in them by integrating Information and Communications Technologies (ICT). There are different areas of application of smart cities, such as government, education, health, buildings and mobility, among others [3,32]. In this case, the focus will be on the environment and the acoustic impact on it, in terms of how acoustic signals could help governments to take actions against natural disasters such as earthquakes. Considering that vibrations before an earthquake can be detected on short notice by special devices, the ASP autonomic cycle will use the location class of the acoustic ontology from the CIM layer to locate vibration sources and their distance (see Step 1 in Figure 7), as well as the sound ontology to identify acoustic features and parameters in the vibrations that will allow perceiving sounds that are outside the human threshold (see Step 2 in Figure 7). The sound analysis tasks previously indicated, which this cycle would carry out, would allow determining the characteristics of the tremor and, in this way, eventually foreseeing preventive actions, as well as preparing future actions in the face of the possible natural disaster that is approaching (earthquake). Now, these are very specific, specialized sound processing tasks that should be invoked by the cycle. Thus, the ontologies are used to infer these needs and, specifically, the analysis tasks and techniques required according to the context. For these specialized tasks, the web services of the SOA capabilities ontology from the PIM layer (see Step 3 in Figure 7) and the data analytics ontology from the PIM layer are required to determine the techniques needed to implement actions against events that could cause damage, such as a warning recommending evacuations of specific areas or indicating the nearest safe place (see Step 4 in Figure 7).
For this case study, the competency questions are:
Q1. Which source feature should be identified at first?
R1. In this case study, our ontological approach can infer "Location".
Q2. How can a final recommendation be offered?
R2. In this case study, our ontological approach can infer "Techniques/Data Analytics".

Comparisons with previous works
In order to compare the ontologies used in previous works [10][11][12] to this article, there is a set of criteria: (1) they can use specialized tasks for sound analysis, (2) they allow the dynamic integration of audio services, (3) they consider the acoustic information from the context, (4) they can be used in diverse real-life contexts and (5) they can be used in SEs. Table 1 shows the comparison of the works. All these works use specialized tasks to perform the sound analysis required in each project; it is what these works have in common. The works [10,11] show an absence of dynamic integration of audio services to develop their proposals, but the work [12] integrates audio services in its experimental setup. Gemmeke et al. [10] are the only ones that take into account the acoustic information from the context in which it is used, while the works [11,12] focus on the audio itself. The works [11,12] describe experimentations considering real-life contexts, while the work [10] introduces this idea as future work. None of these works propose any use in SEs.

Table 1. Comparison to other works

Work      (1)  (2)  (3)  (4)  (5)
[10]       X         X
[11]       X              X
[12]       X    X         X
ReM-AM     X    X    X    X    X
ReM-AM meets all these criteria: it allows the integration of specialized tasks for sound analysis (PIM layer), proposes a dynamic integration of audio services (CIM and PIM layers), uses as much acoustic information from the environment as possible based on the context-awareness principles of the AmICL middleware [3,33], and has as a priority its application in different real-life contexts and SEs (see Section 4).

Conclusions
This work carries out an extension of the ReM-AM middleware based on the ODA paradigm for acoustic management in SEs. The acoustic management and the ReM-AM middleware deployment are integrated by ontologies using the CIM and PIM layers of the ODA paradigm. The CIM layer contains the domain information from a sound ontology and an acoustic ontology; and the PIM layer contains ontologies for the computational implementation of the middleware: SOA Capabilities, Data Analytic, and Autonomic Computing.
In the case studies, it is possible to observe the flexibility that the ReM-AM middleware based on the ODA paradigm has by being aware of different contexts and acquiring information of each one of them, using this information to adapt itself to the environment and improve it using the autonomic cycles. To achieve this, the middleware integrates the classes and relations in its ontologies in the autonomic cycles.
Future works will develop algorithms for classification and analysis of sound events to help with emotion recognition not only from speech but also from random and separate sound events. Also, other works will define the implementation requirements and the real context modeling requirements to develop a real prototype. Some aspects to consider in this future implementation are the sensing mechanisms and their relationship with the elements of the context described in the ontologies of the CIM layer; and the software development mechanisms and deployment platforms and their relationships with the elements described in the ontologies of the PIM layer. Thus, to make acoustic management feasible, there must be a coherence between the SE and the elements described in the ontological framework, but the ontologies are flexible enough to instantiate the elements present in the SE.