Quality-time-complexity universal intelligence measurement

Jing Liu (Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China)
Zhiwen Pan (Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China)
Jingce Xu (Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China)
Bing Liang (Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China)
Yiqiang Chen (Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China)
Wen Ji (Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China)

International Journal of Crowd Science

ISSN: 2398-7294

Article publication date: 16 October 2018

Issue publication date: 29 November 2018


Abstract

Purpose

With the development of machine learning techniques, artificial intelligence systems such as crowd networks are becoming more autonomous and smart. Therefore, there is a growing demand for a universal intelligence measurement, so that the intelligence of such systems can be evaluated. This paper aims to propose a more formalized and accurate machine intelligence measurement method.

Design/methodology/approach

This paper proposes a quality–time–complexity universal intelligence measurement method to measure the intelligence of agents.

Findings

By observing the interaction process between the agent and the environment, we abstract three major factors for intelligence measurement: quality, time and complexity of the environment.

Originality/value

This paper proposes a computable universal intelligence measurement method that considers more than two factors, as well as the correlations between the factors involved in the measurement.

Citation

Liu, J., Pan, Z., Xu, J., Liang, B., Chen, Y. and Ji, W. (2018), "Quality-time-complexity universal intelligence measurement", International Journal of Crowd Science, Vol. 2 No. 2, pp. 99-107. https://doi.org/10.1108/IJCS-04-2018-0007

Publisher

Emerald Publishing Limited

Copyright © 2018, Jing Liu, Zhiwen Pan, Jingce Xu, Bing Liang, Yiqiang Chen and Wen Ji.

License

Published in International Journal of Crowd Science. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

1.1 Background and related work

With the development of machine learning techniques, artificial intelligence systems such as crowd networks are becoming more autonomous and smart. Therefore, a universal intelligence measurement is needed to evaluate the intelligence of such systems. Current intelligence measurement methods can be classified into human IQ tests and measurements of machine intelligence. IQ tests mainly assess the intelligence of individuals through their perception and understanding of knowledge, text and graphics. Machine intelligence can be measured based on human discrimination, problem benchmarks, item response theory estimation and algorithmic information theory (Hernández-Orallo, 2014; Solomonoff, 2009).

In a crowd network, a number of intelligent agents are able to collaborate with each other to finish a certain kind of sophisticated task (Prpic and Shukla, 2016). How to allocate the tasks to the agents in an optimized manner is the primary concern of a crowd network. As agents possess different abilities (e.g. profession and reliability), the optimized task allocation should be performed on the basis of an evaluation of the agents' abilities. However, agents are inherently heterogeneous, because each operates within a hybrid space comprising an information space, a physical space and an awareness space, and this hybrid space varies with the profession and tasks of the corresponding agent. Therefore, it is not feasible to evaluate the ability of an agent in a fully comprehensive manner.

Performing intelligence measurement on agents is one feasible way to evaluate their abilities, at least partially. Research on intelligence measurement dates back to 1950, when the Turing test was proposed by Alan M. Turing (Turing, 1950). Since then, a number of intelligence measurement methods have been proposed by Oppy and Dowe (2003); Longo (2009); Mahoney (1999); Gibson (1998); Masum et al. (2002); Alvarado et al. (2001) and Smith (2006). However, these methods share the following drawbacks:

  • None of these methods is comprehensive enough to consider more than two factors, such as reward quality, timeliness and the complexity of the environment.

  • Most of these methods (Oppy and Dowe, 2003; Mahoney, 1999; Masum et al., 2002; Alvarado et al., 2001; Smith, 2006) do not evaluate the correlations between the factors involved in an intelligence measurement. Hence, the effectiveness of the selected factors cannot be demonstrated.

1.2 Summary of content and contributions

In this paper, we propose an intelligence measurement approach for intelligent machines such as the agents in a crowd network. We name it the QTC (quality–time–complexity) approach, as it performs intelligence measurement by considering three factors: reward quality, timeliness and the complexity of the test environment. We show that the reward quality is correlated with the two other factors. The intelligence of an agent is quantified by calculating the expected accumulated reward quality obtained by the agent.

The rest of the paper is organized as follows. Section 2 briefly introduces the agent–environment framework for conducting the intelligence test. Section 3 introduces the three factors for intelligence measurement, evaluates the correlations between the reward quality and the two other factors, and then presents the QTC measurement approach in detail. In Section 4, we demonstrate the effectiveness of our approach by implementing a well-known intelligence test. We conclude the paper in Section 5.

2. Agent–environment framework for conducting intelligent test

There are two steps in measuring the intelligence of an agent. The first step is to conduct an intelligence test on the agent so that the outcome of the test can be collected for further analysis. The second step is to analyze the information collected from the test using an intelligence measurement approach. In this paper, we conduct intelligence tests based on the widely accepted agent–environment framework, whose details and implementation are introduced in this section.

The agent–environment framework provides a guideline for conducting intelligence tests. As shown in Figure 1, the framework has three components: an agent, an environment and a goal (Legg and Hutter, 2006). The agent is the intelligent entity taking the test. The goal is the task assigned to the agent during the test; it is predefined by the test designer and should be communicated to the agent before the test. The environment is the space that controls the agent and provides rewards based on the agent's actions. During the test, the agent interacts with a dynamic environment to maximize the predefined reward. In particular, the agent sends an action signal to the environment and receives a reward corresponding to the current action. The test can thus be regarded as an interactive process between the agent and the environment, and information can be collected by observing this process.
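To make this interaction protocol concrete, the following Python sketch outlines one possible realization of the framework. The toy classes, their method names and the example reward rule are illustrative assumptions rather than part of the framework itself.

import random

class ToyEnvironment:
    """A toy environment that rewards the action '1' and provides no useful observations."""
    def step(self, action):
        reward = 1.0 if action == 1 else 0.0   # evaluate the agent's action
        observation = None                      # this toy environment has nothing to observe
        return observation, reward

class ToyAgent:
    """A toy agent that always sends the action '1'."""
    def act(self, observation, reward):
        return 1

def run_test(agent, environment, n_interactions):
    rewards = []
    observation, reward = None, 0.0
    for _ in range(n_interactions):
        action = agent.act(observation, reward)         # agent sends an action signal
        observation, reward = environment.step(action)  # environment returns a reward
        rewards.append(reward)                          # record the reward sequence for analysis
    return rewards

print(run_test(ToyAgent(), ToyEnvironment(), n_interactions=5))  # [1.0, 1.0, 1.0, 1.0, 1.0]

The reward sequence returned by such a loop is the raw material for the measurement model developed in Section 3.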

3. Quality–time–complexity intelligence measuring model

In this section, we first analyze the correlations between the reward quality and the two other factors, and then introduce our QTC intelligence measurement model in detail.

3.1 Major factors for measuring intelligence

By observing the interaction process between the agent and the environment, we abstract three major factors that determine the performance of an agent during the intelligence test, as follows:

  1. Reward is the sequence of rewards derived from the actions taken by the agent; it is quantified by calculating the Expected Accumulated Reward (EAR) of the reward sequence.

  2. Time is the set of timestamps of the rewards, which represent the timeliness of the agent's actions.

  3. Environment is the complexity of the test environment, which can be computed; the environment can also adjust itself based on its evaluation of the agent's actions.

To evaluate the correlation among the three factors, we conducted two experiments involving seven agents. In the first experiment, we performed the same intelligence test on four agents and observed the variation of the EAR obtained by each tested agent as the complexity of the environment was progressively increased. The results of the first experiment (Figure 2) indicate that although the EARs of the four agents change with different patterns as the complexity of the environment increases, all of them converge once the complexity exceeds 21. In the second experiment, we performed the same intelligence test on three agents, the third of which was configured to invoke random actions. The result (Figure 3) shows that all three EARs increase with time. Moreover, the EARs of the first two agents converge as the number of interactions increases. Based on the results of the two experiments, we model the correlations of the three factors as the diagram shown in Figure 4. As shown in the figure, time and complexity of the environment both correlate with the reward factor, as the EAR converges when time or the complexity of the environment increases beyond a certain threshold.

Having analyzed the relationships among the three factors, the next question is how to calculate the EAR.

3.2 The reward for each interaction

As the goal of the intelligence measurement is to calculate the value of the reward, our first task is to calculate the reward for each interaction. According to the intelligence test designed by Legg and Hutter (2006), a complete interaction between the agent and the environment includes two steps: the agent sends an action to the environment, and the environment evaluates the action and returns a reward to the agent. For instance, in the Turing test, a complete interaction consists of a question asked by the agent and the answer given by the human.

In an intelligence test where a finite number of interactions occurs within a finite time period, we define the reward $R_i$ of interaction i as:

(1) $R_i(t) = \left(1 + \frac{1}{mt}\right)^t$

where m is the complexity of the environment and t is the time at which action i is invoked. This equation is designed according to the trends shown in Figures 2 and 3.

When the duration of an intelligence test is infinitely long, so that $t \to \infty$, the limit value of $R_i$ is:

(2) $\lim_{t \to \infty} R_i = \lim_{t \to \infty}\left(1 + \frac{1}{mt}\right)^t = e^{\frac{1}{m}}$

The limit in equation (2) is a constant, which conforms to the convergence shown in Figure 2. Hence, this result supports the correctness of equation (1).
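As a numerical illustration of equations (1) and (2) (a minimal sketch, assuming an arbitrary environment complexity m = 4 chosen only for this example), the reward can be evaluated for increasing t and compared with its limit $e^{1/m}$:

import math

def reward(m, t):
    # Equation (1): R_i(t) = (1 + 1/(m*t))**t
    return (1.0 + 1.0 / (m * t)) ** t

m = 4  # assumed complexity of the environment, for illustration only
for t in (1, 10, 100, 1000):
    print(t, reward(m, t))
print("limit e^(1/m) =", math.exp(1.0 / m))
# The printed rewards approach e^(1/4) ≈ 1.284 as t grows, matching equation (2).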

The complexity of the environment m can be calculated according to algorithmic information theory by using Levin's Kt complexity (Li and Vitányi, 2008; Levin, 1973) as follows:

(3) $m(p, \pi) = \min\left\{\, l(p) + \log \mathrm{time}(\pi, p) \,\right\}$

where p represents the action and π represents the agent.
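Levin's Kt complexity is not computable in general, because it minimizes over all programs that reproduce the behaviour in question, so in practice it can only be approximated over a restricted candidate set. The sketch below illustrates the form of equation (3) for a hypothetical list of candidate programs, each described by its length l(p) in bits and its observed running time; the base-2 logarithm and the candidate values are assumptions made only for illustration.

import math

def levin_kt(candidates):
    """Approximate m(p, pi) = min over candidates of l(p) + log2(time(pi, p)).

    `candidates` is an iterable of (description_length_bits, running_time_steps)
    pairs for programs assumed to reproduce the environment's behaviour.
    """
    return min(length + math.log2(time) for length, time in candidates)

# Hypothetical candidate programs: (l(p) in bits, running time in steps).
candidates = [(120, 5000), (95, 200000), (300, 50)]
print(levin_kt(candidates))  # the smallest l(p) + log2(time) among the candidates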

By substituting equation (3) into equation (1), the reward Ri of interaction i can be calculated as:

(4) $R_i(t) = \left(1 + \frac{1}{m_{p_i} t}\right)^t$

3.3 Intelligence measuring model

In this paper, we measure the intelligence of agent π by calculating the EAR obtained by agent π within a predefined period t. Hence, the objective of the intelligence measurement model is to accurately calculate the EAR. According to the intelligence measuring model introduced by Hernández-Orallo and Dowe (2010), the calculation of the EAR can be based on the sum of the average rewards obtained by agent π within a predefined period t (defined as $V_\mu^\pi$). The equation to calculate $V_\mu^\pi$ is as follows:

(5) $V_\mu^\pi := E\!\left(\sum_{i=1}^{n_i} R_i\right) = \frac{1}{n_i}\sum_{i=1}^{n_i}\left(1 + \frac{1}{m_{p_i} t}\right)^t$
where $n_i$ is the total number of interactions and μ is the identity of the environment. Based on equation (5), we derive the EAR following Legg and Veness (2013):
(6) $\Upsilon := \sum_{\mu \in E} 2^{-K(\mu)} V_\mu^\pi$
By substituting equation (5) into equation (6), we can obtain the value of the EAR as:
(7) $\Upsilon := \frac{1}{n_i}\sum_{\mu \in E} 2^{-K(\mu)}\sum_{i=1}^{n_i}\left(1 + \frac{1}{m_{p_i} t}\right)^t$
where the environment μ belongs to the environment set E, which includes all computable reward-bounded environments, and K(·) is the Kolmogorov complexity.
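As a worked illustration of the weighting in equations (6) and (7), suppose the environment set is restricted to just two environments $\mu_1$ and $\mu_2$ with hypothetical Kolmogorov complexities $K(\mu_1) = 2$ and $K(\mu_2) = 3$, and that the agent obtains $V_{\mu_1}^\pi = 0.8$ and $V_{\mu_2}^\pi = 0.5$ (assumed values). Then:

$\Upsilon \approx 2^{-K(\mu_1)} V_{\mu_1}^\pi + 2^{-K(\mu_2)} V_{\mu_2}^\pi = 2^{-2}\cdot 0.8 + 2^{-3}\cdot 0.5 = 0.2 + 0.0625 = 0.2625$

so environments with lower complexity (smaller K(μ)) contribute more heavily to the measured intelligence.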

By combining equations (1)-(7), we can eventually propose our intelligence measure model as:

(8) $\Upsilon(t; \theta) := \frac{1}{n_i}\sum_{\mu \in E} 2^{-K(\mu)}\sum_{i=1}^{n_i}\left(1 + \frac{1}{m_{p_i} t}\right)^t$
(9) $\text{s.t. } t > t_0$
(10) $m_{p_i}, n_i \in \mathbb{N}^+$
where $\theta = (\mu, \pi)^T$ is the parameter of the EAR and μ is the identity of the environment.

4. The results analysis of the model

In this section, we implement our proposed intelligence measurement model and then conduct an experiment to evaluate its performance.

4.1 The algorithm of quality–time–complexity universal intelligence measurement

The implementation of our proposed model is described by the following pseudocode:

Algorithm 1 Universal intelligence test

Input: t (the time of the interaction), p (interactive behavior)

Output: a real number ϒ(t; θ) (the Expected Accumulated Reward obtained by the agent from its interactions with the environments)

1: Calculate the complexity of the environment m based on equation (3).

2: Calculate the reward R_i for each action based on equation (4).

3: Calculate the expected sum of rewards $V_\mu^\pi$ based on equation (5).

4: Calculate the Expected Accumulated Reward ϒ based on equation (7).

5: return ϒ(t; θ)
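A minimal Python sketch of Algorithm 1 is given below. It assumes the caller supplies, for each environment, its Kolmogorov complexity K(μ) and the per-interaction complexities $m_{p_i}$; these inputs, the helper names and the example values are illustrative assumptions rather than part of the original specification.

import math

def environment_complexity(candidates):
    # Step 1, equation (3): m = min over candidate programs of l(p) + log2(time).
    return min(length + math.log2(time) for length, time in candidates)

def interaction_reward(m, t):
    # Step 2, equation (4): R_i(t) = (1 + 1/(m*t))**t.
    return (1.0 + 1.0 / (m * t)) ** t

def expected_sum_of_rewards(ms, t):
    # Step 3, equation (5): average of the per-interaction rewards in one environment.
    return sum(interaction_reward(m, t) for m in ms) / len(ms)

def universal_intelligence(environments, t):
    # Step 4, equation (7): EAR = sum over environments of 2**(-K(mu)) * V_mu^pi.
    # `environments` maps an environment id to (K(mu), [m_{p_i} for each interaction]).
    return sum(2.0 ** (-k) * expected_sum_of_rewards(ms, t)
               for k, ms in environments.values())

# Hypothetical example: two environments with assumed complexities and interaction data.
environments = {"mu1": (2, [3.0, 4.0, 5.0]), "mu2": (3, [6.0, 7.0])}
print(universal_intelligence(environments, t=10.0))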

Based on the pseudocode introduced above, we performed a simulation to visualize the correlation between the Expected Accumulated Reward, time and the complexity of the environment.

According to the simulation result shown in Figure 5, the Expected Accumulated Reward increases with time and decreases significantly as the complexity of the environment increases.
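The trend in Figure 5 can be reproduced qualitatively with a short script such as the sketch below, which plots a single-environment slice of equation (7); the ranges of m and t are arbitrary choices for illustration, so the result only shows the described behaviour and is not the exact simulation.

import numpy as np
import matplotlib.pyplot as plt

m = np.linspace(1, 30, 60)    # complexity of environment (x-axis)
t = np.linspace(1, 100, 60)   # time (y-axis)
M, T = np.meshgrid(m, t)
EAR = 2.0 ** (-M) * (1.0 + 1.0 / (M * T)) ** T  # single-environment slice of equation (7)

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(M, T, EAR)
ax.set_xlabel("complexity of environment")
ax.set_ylabel("time")
ax.set_zlabel("EAR")
plt.show()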

4.2 Experimental analysis

In this section, an example of implementing the proposed intelligence measurement is introduced in detail. Consider a test setting where a chimpanzee (the agent) can press one of three buttons (A = {B1, B2, B3}). The reward is either a banana or nothing (R = {0, 1}). The observation set is derived from an environment where a ball must be put into one of three cells (O = {C1, C2, C3}). We start the test by giving a banana to the chimpanzee, which means that the first reward is 1. The observations are generated uniformly at random over O, and the rewards are determined accordingly. The behavior patterns of the agents are designed as follows.

The first chimpanzee π1 is much more likely to press button B1, i.e. the probability π1(B1|X) is high for all sequences X. Consequently, the performance of π1 in this test is:

(11) $E\!\left(V_\mu^{\pi_1}\right) = E_{n_i}\!\left(\frac{\sum_{k=1}^{n_i} R_k^{\mu,\pi_1}}{n_i}\right) = \frac{2}{4}\lim_{n_i \to \infty}\frac{n_i}{n_i} + \frac{2}{4}\lim_{n_i \to \infty}\frac{0}{n_i} = \frac{1}{2}$

The second chimpanzee (π2) behaves randomly. Hence, the performance of π2 is:

(12) $E\!\left(V_\mu^{\pi_2}\right) = E_{n_i}\!\left(\frac{\sum_{k=1}^{n_i} R_k^{\mu,\pi_2}}{n_i}\right) = \frac{3}{3}\left(-\frac{2}{4}\lim_{n_i \to \infty}\frac{n_i}{n_i} + \frac{1}{4}\lim_{n_i \to \infty}\frac{n_i}{n_i} + \frac{1}{4}\lim_{n_i \to \infty}\frac{n_i}{n_i}\right) = 0$
By comparing the performance between the two agents, we can conclude that agent π1 is better than agent π2 during this test.

5. Conclusion and future work

Traditional measures of human intelligence and machine intelligence are difficult to apply to the forms of intelligence found in current environments, and they have significant limitations. In this paper, we propose a universal intelligence measurement approach: the QTC approach. We abstract three major factors for intelligence measurement: reward quality, time and complexity of the environment. The correlations among the three factors are estimated by conducting two experiments, so that the intelligence measurement model can be designed accordingly. Based on this model, we quantify the intelligence of an agent by calculating the EAR achieved by the agent during an intelligence test. In the future, we plan to design and implement a set of comprehensive experiments to evaluate the performance of our measurement model.

Figures

Figure 1. The interaction between the agent and the environment

Figure 2. Expected Accumulated Reward vs the complexity of the environment

Figure 3. Expected Accumulated Reward vs the time of interactions

Figure 4. Correlations between the three major factors for intelligent measure

Figure 5. Relationship between the three major factors of intelligent measurement, where the x-axis represents the complexity of environment, the y-axis represents the time and the z-axis represents the EAR

References

Alvarado, N., Adams, S. S., Burbeck, S. and Latta, C. (2001), “Project Joshua Blue: design considerations for evolving an emotional mind in a simulated environment”, Proceedings of the 2001 AAAI Fall Symposium Emotional and Intelligent II, The Tangled Knot of Social Cognition.

Gibson, E. (1998), “Linguistic complexity: locality of syntactic dependencies”, Cognition, Vol. 68 No. 1, pp. 1-76.

Hernández-Orallo, J. (2014), “AI evaluation: past, present and future”, arXiv preprint arXiv:1408.6908.

Legg, S. and Hutter, M. (2006), “A formal measure of machine intelligence”, arXiv preprint cs/0605024.

Levin, L. A. (1973), “Universal sequential search problems”, Problemy Peredachi Informatsii, Vol. 9 No. 3, pp. 115-116.

Li, M. and Vitányi, P.M.B. (2008), An Introduction to Kolmogorov Complexity and Its Applications, 3rd ed., Springer-Verlag, New York.

Longo, G. (2009), “Turing and the imitation game impossible geometry”, Parsing the Turing Test, Springer, Dordrecht, pp. 377-411.

Mahoney, M. V. (1999), “Text compression as a test for artificial intelligence”, Proceedings of AAAI/IAAI, p. 970.

Masum, H., Christensen, S. and Oppacher, F. (2002), “The turing ratio: metrics for open-ended tasks”, Proceedings of the 4th Annual Conference on Genetic and Evolutionary Computation, Morgan Kaufmann Publishers, pp. 973-980.

Oppy, G. and Dowe, D. (2003), “The Turing test”, The Stanford Encyclopedia of Philosophy, available at: https://seop.illc.uva.nl/entries/turing-test/

Smith, W. D. (2006), “Mathematical definition of intelligence (and consequences)”, unpublished report.

Solomonoff, R. J. (2009), “Algorithmic probability: theory and applications”, Information Theory and Statistical Learning, Springer, Boston, MA, pp. 1-23.

Turing, A.M. (1950), “Computing machinery and intelligence”, Mind, Vol. 59 No. 236, pp. 433-460.

Further reading

Legg, S. and Veness, J. (2013), “An approximation of the universal intelligence measure”, Algorithmic Probability and Friends: Bayesian Prediction and Artificial Intelligence, Springer, Berlin, Heidelberg, pp. 236-249.

Prpic, J. and Shukla, P. (2016), “Crowd science: measurements, models, and methods”, Proceedings of the 49th Hawaii International Conference on System Sciences (HICSS), IEEE, pp. 4365-4374.

Hernández-Orallo, J. and Dowe, D.L. (2010), “Measuring universal intelligence: towards an anytime intelligence test”, Artificial Intelligence, Vol. 174 No. 18, pp. 1508-1539.

Acknowledgements

This work is supported by the National Key Research & Development Plan of China (2017YFB1400100), the National Natural Science Foundation of China (61572466), and the Beijing Natural Science Foundation (4162059).

Corresponding author

Wen Ji can be contacted at: jiwen@ict.ac.cn

About the authors

Jing Liu is currently pursuing MS at the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), Beijing, China. Her current research interests include crowd science, intelligence measure, machine learning and pervasive computing.

Zhiwen Pan received BS degree from the Purdue University Calumet, in 2012, and the MS and PhD degrees from the University of Arizona, in 2014 and 2017, respectively. He is currently an Assistant Research Fellow in Research Center for Ubiquitous Computing System, Institute of Computing Technology, Chinese Academy of Science. His current research focuses on cybersecurity, secure critical infrastructures, anomaly detection and context awareness.

Jingce Xu was born in 1994. He received BSc in Software Engineering from Nankai University, China, in 2016. He is currently working toward MSc in Computer Application Technology at the Institute of Computing Technology Chinese Academy of Sciences. His research interests include multimedia communication and networking, video transmission and video QoE optimization.

Bing Liang is currently pursuing PhD at the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), Beijing, China. His current research interests include multimedia communication and networking, video transmission, game theory and pervasive computing.

Yiqiang Chen received BSc and MS degrees from the University of Xiangtan, Xiangtan, China, in 1996 and 1999, respectively, and PhD degree from the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), Beijing, China, in 2002. In 2004, he was a Visiting Scholar Researcher with the Department of Computer Science, Hong Kong University of Science and Technology (HKUST), Hong Kong. He is currently a Professor and Director with the Pervasive Computing Research Center, ICT, CAS. His research interests include artificial intelligence, pervasive computing and human–computer interface.

Wen Ji received MS and PhD degrees in communication and information systems from Northwestern Polytechnical University, China, in 2003 and 2006, respectively. She is a Professor in the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), Beijing, China. From 2014 to 2015, she was a Visiting Scholar at the Department of Electrical Engineering, Princeton University, USA. Her research areas include video communication and networking, video coding, channel coding, information theory, optimization, network economics and ubiquitous computing.
