Search results
1 – 10 of over 3000
Ziming Zhou, Fengnian Zhao and David Hung
Abstract
Purpose
Higher energy conversion efficiency of an internal combustion engine can be achieved with optimal control of the unsteady in-cylinder flow fields inside a direct-injection (DI) engine. However, predicting the nonlinear and transient in-cylinder flow motion remains a daunting task because it is highly complex and changes in both space and time. Recently, machine learning methods have demonstrated great promise in inferring relatively simple temporal flow field development. This paper aims to feature a physics-guided machine learning approach that achieves high-accuracy, generalizable prediction of complex swirl-induced flow field motions.
Design/methodology/approach
To achieve high-fidelity time-series prediction of unsteady engine flow fields, this work features an automated machine learning framework with the following objectives: (1) the spatiotemporal physical constraint of the flow field structure is transferred to the machine learning structure; (2) the ML inputs and targets are designed efficiently to ensure high model convergence with limited sets of experiments; and (3) the prediction results are optimized by an ensemble learning mechanism within the automated machine learning framework.
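The abstract does not specify the model internals, but the idea of embedding spatial flow structure into a time-series learner can be sketched. The following is a hypothetical illustration, not the paper's framework: a truncated SVD (POD) basis captures coherent spatial structures, and a ridge regressor then predicts the next snapshot's modal coefficients from the current ones, so the learning problem respects the spatial constraint. All data and dimensions are invented.

```python
import numpy as np

# Illustrative only: POD basis as a spatial constraint + one-step
# time-series regression on the modal coefficients.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((50, 200))        # 50 time steps, 200 spatial points

# Low-rank basis capturing coherent spatial flow structures
centered = snapshots - snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
r = 5                                             # retain 5 spatial modes
coeffs = centered @ Vt[:r].T                      # temporal coefficients, shape (50, 5)

# ML inputs/targets: coefficients at t predict coefficients at t + 1
X, Y = coeffs[:-1], coeffs[1:]
lam = 1e-3                                        # ridge regularization strength
W = np.linalg.solve(X.T @ X + lam * np.eye(r), X.T @ Y)

pred = coeffs[-2] @ W                             # one-step modal prediction
field_pred = pred @ Vt[:r] + snapshots.mean(axis=0)   # back to physical space
print(field_pred.shape)
```

Because the regression acts on a handful of modal coefficients rather than the full field, very limited experimental data can already constrain the model, which mirrors objective (2) above.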
Findings
The proposed data-driven framework proves effective over different time periods and different extents of unsteadiness of the flow dynamics, and the predicted flow fields are highly similar to the target fields under various complex flow patterns. Among the described framework designs, the utilization of the spatial flow field structure is the featured improvement to the time-series flow field prediction process.
Originality/value
The proposed flow field prediction framework could be generalized to different crank angle periods, cycles and swirl ratio conditions, which could greatly promote real-time flow control and reduce experiments on in-cylinder flow field measurement and diagnostics.
Javad Gerami, Mohammad Reza Mozaffari, Peter Wanke and Yong Tan
Abstract
Purpose
This study aims to present cost and revenue efficiency evaluation models in data envelopment analysis in the presence of fuzzy inputs and outputs whose prices are also fuzzy. This study applies the proposed approach in the energy sector of the oil industry.
Design/methodology/approach
This study proposes a value-based technology according to fuzzy input-cost and revenue-output data and, based on this technology, an approach to calculate fuzzy cost and revenue efficiency using a directional distance function. The proposed approach incorporates a decision-maker’s (DM) a priori knowledge into the fuzzy cost (revenue) efficiency analysis.
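The mechanics of fuzzy cost efficiency can be sketched in a much-simplified form. The toy below is not the authors' directional-distance model: it uses triangular fuzzy numbers, alpha-cuts and interval arithmetic, and takes a DMU's cost efficiency crudely as (cheapest cost among all DMUs) / (its own cost), clamped to [0, 1]. All data, the single-input setting and the efficiency formula are invented for illustration.

```python
# Triangular fuzzy number (l, m, u); alpha-cut is [l + a(m - l), u - a(u - m)].
def alpha_cut(tri, a):
    l, m, u = tri
    return (l + a * (m - l), u - a * (u - m))

def interval_mul(x, y):
    # Interval product: extremes among the four endpoint products
    prods = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(prods), max(prods))

# Hypothetical DMUs: (fuzzy input, fuzzy price), all producing the same output
dmus = {
    "A": ((2.0, 3.0, 4.0), (1.0, 1.5, 2.0)),
    "B": ((4.0, 5.0, 6.0), (1.0, 1.2, 1.4)),
    "C": ((1.5, 2.0, 2.5), (2.0, 2.5, 3.0)),
}

a = 0.5                                           # alpha level
costs = {k: interval_mul(alpha_cut(x, a), alpha_cut(p, a))
         for k, (x, p) in dmus.items()}
best_lo = min(c[0] for c in costs.values())       # cheapest attainable cost bounds
best_hi = min(c[1] for c in costs.values())

# Interval cost efficiency per DMU, kept inside [0, 1]
eff = {k: (min(1.0, best_lo / c[1]), min(1.0, best_hi / c[0]))
       for k, c in costs.items()}
print(eff)
```

The interval endpoints here play the role of the components of the fuzzy efficiency score mentioned in the findings; the real model would solve directional-distance programs at each alpha level instead of taking simple cost ratios.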
Findings
This study shows that the proposed approach obtains the components of the fuzzy numbers corresponding to fuzzy cost efficiency scores in the interval [0, 1] for each of the decision-making units (DMUs). The models presented in this paper satisfy the most important properties: translation invariance and the ability to handle negative data. The proposed approach also obtains the fuzzy efficient targets corresponding to each DMU.
Originality/value
In the proposed approach, selecting an appropriate direction vector in the model allows the DM’s preference information to be incorporated into the evaluation of fuzzy cost or revenue efficiency, which demonstrates the efficiency of the method and the advantages of the proposed model in a fully fuzzy environment.
Lars-Erik Gadde and Håkan Håkansson
Abstract
Purpose
In today’s business settings, most firms strive to closely integrate their resources and activities with those of their business partners. However, these linkages tend to create lock-in effects when changes are needed. In such situations, firms need to generate new space for action. The purpose of this paper is twofold: to analyze potential action spaces for restructuring, and to examine how action spaces can be exploited and the consequences accompanying this implementation.
Design/methodology/approach
Network dynamics originate from changes in the network interdependencies. This paper is focused on the role of the three dual connections – actors–activities, actors–resources and activities–resources, identified as network vectors. In the framing of the study, these network vectors are combined with managerial action expressed in terms of networking and network outcome. This framework is then used for the analysis of major restructuring of the car industries in the USA and Europe at the end of the 1900s.
Findings
This study shows that the restructuring of the car industry can be explained by modifications in the three network vectors. Managerial action through changes of the vector features generated new action space contributing to the transition of the automotive network. The key to successful exploitation of action space was interaction – with individual business partners, in triadic constellations, as well as on the network level.
Originality/value
This paper presents a new view of network dynamics by relying on the three network vectors. These concepts were developed in the early 1990s but have so far been used only to a limited extent.
Yijie Zhang, Ling Ma, Ziyi Guo, Tao Li and Fengyuan Zou
Abstract
Purpose
Considering that a two-dimensional (2D) ease allowance alone cannot fully reflect the three-dimensional (3D) relationship between clothing and the human body, the purpose of this paper is to propose a method that uses 3D space vectors and the corresponding distance ease to characterize fitted garments, which is then used to construct personalized clothing for bodies of similar shape.
Design/methodology/approach
First, a 3D scanner was used to obtain mannequin and fitted-garment data, and 17 layers of cross-sections of the upper body were extracted. Then, 37 space vectors and the corresponding space angles on each cross-section were obtained from the original point. Second, the detailed distance ease between the mannequin and the garment was constructed from the difference between the garment vectors and the body vectors. Third, mathematical models of the distance ease were established and used to calculate the distance ease for a body of similar shape. Finally, the fitted garment is constructed and the garment pattern is altered by the geometric pattern alteration method.
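The per-section geometry described above can be sketched numerically. The following is a hypothetical illustration of one cross-section only: rays are cast from the section's centre at 37 fixed angles, the body and garment radii along each ray define the space vectors, and the distance ease is their difference. The radius functions are invented toy shapes, not scan data.

```python
import numpy as np

# One cross-section layer, 37 space vectors (as in the described method);
# the radius profiles below are made up for illustration.
angles = np.deg2rad(np.arange(0, 360, 360 / 37))      # 37 angles around the section
body_r = 10 + 0.5 * np.cos(2 * angles)                # toy body cross-section radii
garment_r = body_r + 1.5 + 0.3 * np.sin(angles)       # garment sits outside the body

# Space vectors (x, y) from the original point for body and garment
body_pts = np.stack([body_r * np.cos(angles), body_r * np.sin(angles)], axis=1)
garment_pts = np.stack([garment_r * np.cos(angles), garment_r * np.sin(angles)], axis=1)

# Distance ease per space vector on this layer
distance_ease = garment_r - body_r
print(distance_ease.min() > 0)                        # True: garment clears the body
```

Repeating this over the 17 layers yields a 17 x 37 distance-ease matrix, which is the kind of structure the paper's mathematical models would then fit and transfer to a similarly shaped body.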
Findings
The results show that 3D space vectors can properly explain the relationship between the body skin and the garment surface of the upper body. The distance ease is modeled by mathematical expressions and successfully used to make a new garment that fits a body of similar shape.
Originality/value
The proposed method of constructing garments based on distance ease and 3D space vectors can create a fitted garment for a body of similar shape effectively and accurately. It is useful for personalized garment design and suitable for the manufacturing process.
Mohan Khatri and Jay Prakash Singh
Abstract
Purpose
This paper aims to study almost Ricci–Yamabe soliton in the context of certain contact metric manifolds.
Design/methodology/approach
The paper is organized as follows: in Section 3, a complete contact metric manifold with the Reeb vector field ξ as an eigenvector of the Ricci operator admitting an almost Ricci–Yamabe soliton is considered. In Section 4, a complete K-contact manifold admitting a gradient Ricci–Yamabe soliton is studied. Then, in Section 5, a gradient almost Ricci–Yamabe soliton on a non-Sasakian (k, μ)-contact metric manifold is considered. Moreover, the obtained result is verified by constructing an example.
Findings
We prove that if the metric g admits an almost (α, β)-Ricci–Yamabe soliton with α ≠ 0 and potential vector field collinear with the Reeb vector field ξ on a complete contact metric manifold with ξ as an eigenvector of the Ricci operator, then the manifold is a compact Einstein Sasakian manifold and the potential vector field is a constant multiple of ξ. In the complete K-contact case, we found that the manifold is isometric to the unit sphere S^(2n+1); in the (k, μ)-contact metric case, it is flat in dimension three and locally isometric to E^(n+1) × S^n(4) in higher dimensions.
Originality/value
All results are novel and generalizations of previously obtained results.
Abstract
Purpose
To run a job guarantee public policy scheme, it is important to know the aspiration level, or reference point, of labor, and accordingly to prepare the labor hours and the wage sequence. Existing job guarantee schemes consider the same wage rate for all types of jobs, and as a result it is difficult to identify the reference point. The present work proposes a job guarantee scheme in which different types of jobs carry different wage rates. The paper explains the choice problem between labor and leisure at different wage rates and proposes complete computational tools to be incorporated into job guarantee schemes. It also gives a mechanism to prepare the list of jobs and corresponding wage rates by maintaining a balance between labor and leisure, where productive activities measure labor hours and labor welfare measures leisure hours. Lastly, the paper provides analytical tools to interpret the ex-post data of job guarantee public policy schemes.
Design/methodology/approach
The paper is based on a coordination game and its welfare implications in job guarantee public policy schemes.
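The labor-leisure choice problem at different wage rates can be sketched with a standard textbook utility, which is not the paper's model: below, an invented Cobb-Douglas utility U = c^a * leisure^(1-a) with consumption c = w * hours is maximized over a grid of feasible hours for several wage rates. All parameters are hypothetical.

```python
# Illustrative labor-leisure choice; utility form and parameters are invented.
T, a = 16.0, 0.6                     # hours available per day, consumption weight

def best_hours(w, grid=1000):
    """Grid-search the labor hours that maximize Cobb-Douglas utility at wage w."""
    candidates = [T * i / grid for i in range(1, grid)]
    def utility(h):
        return (w * h) ** a * (T - h) ** (1 - a)
    return max(candidates, key=utility)

schedule = {w: best_hours(w) for w in (1.0, 2.0, 4.0)}
print(schedule)
```

With this particular utility the chosen hours equal a*T at every wage (the wage scales utility but not the argmax), a simple instance of labor supply converging to a point that does not move as the wage rises, the kind of equilibrium behavior the scheme's wage lists are designed around.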
Findings
The present paper provides initial work on practically measuring the choice between labor and leisure at different wage rates. This helps in obtaining the equilibrium strategies, namely the combinations of labor hours and wage rates, between the policymaker and labor, and thereby in implementing job guarantee schemes. For example, for a Basic Income policy to run successfully, the basic income calculation requires due care; otherwise there will be a downward trend in the basic income and the welfare of labor will be reduced, because labor would have to supply excess labor to meet the target income.
Originality/value
This paper derives theories and explains how the equilibrium of this coordination game can be achieved, and how the policy of job guarantee schemes can be implemented in practice. In the MGNREGA scheme, the public institution declares different categories of jobs with different wage rates, classified by the hours required to complete each job. The public institution therefore declares different lists, or sequences of pairs of labor hours and wage rates. Moreover, the list is stochastic, because it can also be changed by the inclusion of offers from the market, and labor has to select from the list. The challenge for the public institution is to prepare the list in such a way that the inclusion of market offers does not distort the equilibrium of the coordination game. An important method is proposed here to analyze the ex-post data of job offers so that future sequences of job offers can be prepared with due care. One objective of the policymaker is to design the list of job offers so that labor supply converges to a point and does not deviate if the wage rate increases further. This objective balances the distribution of funds between the existing registered labor and new entrants into the job guarantee schemes.
Taposh Kumar Roy and Md Habibullah
Abstract
Purpose
Predictive current control (PCC) of three-to-five-phase direct matrix converters (DMCs) is computationally expensive. For this reason, this study aims to consider a reduced number of switching states of the DMC in the PCC algorithm to predict the control objectives, namely the output current and the input reactive power.
Design/methodology/approach
The switching sequences that yield voltage vectors of variable amplitude at a constant frequency in space are considered for the prediction and optimization step of the PCC algorithm. For the selected voltage vectors, the phase angles of the output vectors are independent of the phase angles of the input vectors. In a three-to-five-phase DMC, there are 243 valid switching states; among them, only 91 states are considered using the aforementioned concept of variable-amplitude output at a constant frequency. This reduced number of switching states simplifies the computational complexity of model predictive current control of the three-to-five-phase DMC.
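The structure of finite-set predictive control over a reduced candidate set can be sketched. In a three-to-five-phase DMC each of the 5 output phases connects to one of the 3 input phases, giving 3^5 = 243 switching states. The sketch below is hypothetical: the reduced set is an arbitrary 91-state slice standing in for the paper's constant-frequency vector selection, and the prediction model is a made-up linear one, so only the enumerate-predict-minimize loop is faithful.

```python
from itertools import product

# All 243 switching states: each output phase picks one of 3 input phases
states = list(product(range(3), repeat=5))

v_in = [1.0, -0.5, -0.5]                 # toy input phase voltages
i_ref = [0.8, 0.2, -0.3, -0.4, -0.3]     # toy output current references

def cost(state):
    # Made-up one-step prediction: output current proportional to applied voltage
    i_pred = [0.9 * v_in[k] for k in state]
    return sum((p - r) ** 2 for p, r in zip(i_pred, i_ref))

# Placeholder for the paper's 91 constant-frequency states (selection rule not shown)
reduced = states[:91]

best = min(reduced, key=cost)            # optimization step over the reduced set
print(len(states), len(reduced), best)
```

Evaluating the cost for 91 candidates instead of 243 roughly cuts the per-sample computation by almost two-thirds, which is what allows the higher sampling frequency claimed in the originality section.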
Findings
The computational complexity of the proposed PCC-based DMC is lower than that of the PCC based on all 243 vectors. The current total harmonic distortion, transient current response and input reactive power control of the simplified 91-vector PCC are similar to those of the 243-vector PCC.
Originality/value
A reduced number of switching sequences is considered for the prediction and optimization step of the PCC algorithm. Hence, the PCC algorithm can be sampled at a higher frequency in real-time applications, which improves the performance of the PCC.
Xiongming Lai, Yuxin Chen, Yong Zhang and Cheng Wang
Abstract
Purpose
The paper proposes a fast procedure for solving the reliability-based robust design optimization (RBRDO) by modifying the RBRDO formulation and transforming it into a series of RBRDO subproblems. For each subproblem, the objective function, constraint functions and reliability index are approximated using Taylor series expansion; their approximate forms depend on the deterministic design vector rather than the random vector, so the uncertainty estimation in the inner loop of the RBRDO can be avoided. In this way, the number of performance-function evaluations is greatly reduced. Lastly, the trust region method is used to manage the sequential RBRDO subproblems for convergence.
Design/methodology/approach
As is known, RBRDO is a nested optimization, where the outer loop updates the design vector and the inner loop estimates the uncertainties, so solving the RBRDO requires a large number of performance-function evaluations. Aiming at this issue, the paper proposes a fast integrated procedure that reduces this evaluation count. First, it transforms the original RBRDO problem into a series of RBRDO subproblems. In each subproblem, the objective function, constraint functions and reliability index are approximated using simple explicit functions that depend solely on the deterministic design vector rather than the random vector. In this way, the need for extensive sampling simulation in the inner loop is greatly reduced, and the number of performance-function evaluations drops significantly, leading to a substantial reduction in computation cost. The trust region method is then employed to handle the sequential RBRDO subproblems, ensuring convergence to the optimal solutions. Finally, an engineering test and an application are presented to illustrate the effectiveness and efficiency of the proposed methods.
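The core trick, replacing inner-loop sampling with a Taylor expansion about the deterministic design, can be shown in one dimension. This is only a sketch of the approximation step, with an invented performance function; the paper's formulation is multivariate and wrapped in a trust-region loop. For g(d + Z) with Z ~ N(0, σ²), a first-order expansion gives E[g] ≈ g(d) and Std[g] ≈ |g′(d)|σ, both functions of the deterministic design d alone.

```python
import numpy as np

def g(x):
    return x**3 - 2 * x              # invented performance function

d, sigma = 1.5, 0.05                 # deterministic design point, input noise
grad = 3 * d**2 - 2                  # analytic derivative g'(d)

# Taylor-based statistics: no sampling, depends only on the design point d
mean_taylor = g(d)                   # E[g(d + Z)] ~= g(d)
std_taylor = abs(grad) * sigma       # Std[g(d + Z)] ~= |g'(d)| * sigma

# Monte Carlo reference (the expensive inner loop the method avoids)
rng = np.random.default_rng(1)
z = rng.normal(0.0, sigma, 100_000)
samples = g(d + z)
mean_mc, std_mc = samples.mean(), samples.std()

print(abs(mean_taylor - mean_mc) < 0.05, abs(std_taylor - std_mc) < 0.02)
```

One function-and-gradient evaluation replaces 100,000 samples here; repeating this inside every subproblem is where the large savings in performance-function evaluations come from.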
Findings
The proposed fast procedure for solving the RBRDO greatly reduces the number of performance-function evaluations within the RBRDO, and the computation cost is reduced accordingly, which makes the method suitable for engineering applications.
Originality/value
The standard deviation of the original objective function of the RBRDO is replaced by the mean and the reliability index of the original objective function, which are further approximated by using Taylor series expansion and their approximate forms depend on the deterministic design vector rather than the random vector. Moreover, the constraint functions are also approximated by using Taylor series expansion. In this way, the uncertainty estimation of the performance functions (i.e. the mean of the objective function, the constraint functions) and the reliability index of the objective function are avoided within the inner loop of the RBRDO.
Jie Yang, Manman Zhang, Linjian Shangguan and Jinfa Shi
Abstract
Purpose
The possibility function-based grey clustering model has evolved into a complete approach for dealing with uncertainty evaluation problems. However, existing models still suffer from the choice dilemma of the maximum criterion, and there are instances in which the possibility function may not accurately capture the data's randomness. This study aims to propose a multi-stage skewed grey cloud clustering model that blends greyness and randomness to overcome these problems.
Design/methodology/approach
First, the skewed grey cloud possibility (SGCP) function is defined, and its digital characteristics demonstrate that a normal cloud is a particular instance of a skewed cloud. Second, the boundary of the decision paradox of the maximum criterion is established. Third, using the skewed grey cloud kernel weight (SGCKW) transformation as a tool, the multi-stage skewed grey cloud clustering coefficient (SGCCC) vector is calculated, and research objects are clustered according to this multi-stage SGCCC vector with overall features. Finally, the solution steps of the multi-stage skewed grey cloud clustering model are provided.
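Since the abstract notes that a normal cloud is a special case of the skewed cloud, the clustering mechanics can be illustrated with the simpler normal cloud model. The sketch below is hypothetical and not the SGCP/SGCKW formulation: each grey class has expectation Ex, entropy En and hyper-entropy He; a sample's membership is averaged over random "cloud drops" to absorb randomness, and the object joins the class with the largest clustering coefficient. All class parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented grey classes: (Ex, En, He) per class
classes = {"low": (2.0, 1.0, 0.1), "mid": (5.0, 1.0, 0.1), "high": (8.0, 1.0, 0.1)}

def membership(x, Ex, En, He, drops=2000):
    # Cloud model: randomize the entropy per drop, then average the
    # Gaussian-shaped memberships to blend greyness with randomness
    En_prime = rng.normal(En, He, drops)
    return np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2)).mean()

def cluster(x):
    # Clustering-coefficient vector over all classes (SGCCC-like role)
    coeffs = {k: membership(x, *p) for k, p in classes.items()}
    return max(coeffs, key=coeffs.get), coeffs

label, coeffs = cluster(4.6)
print(label)   # "mid"
```

The maximum-criterion paradox the paper addresses arises when two entries of this coefficient vector are nearly tied; the multi-stage scheme re-clusters with transformed weights instead of committing to a single fragile argmax.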
Findings
The results of applying the model to the assessment of college students' capacity for innovation and entrepreneurship revealed that, in comparison to the traditional grey clustering model and the two-stage grey cloud clustering evaluation model, the proposed model's clustering results have higher identification and stability, which partially resolves the decision paradox of the maximum criterion.
Originality/value
Compared with current models, the proposed model in this study can dynamically depict the clustering process through multi-stage clustering, ensuring the stability and integrity of the clustering results and advancing grey system theory.
Abstract
Purpose
In recent years, Chinese sentiment analysis has made great progress, but the characteristics of the language itself and the requirements of downstream tasks have not been explored thoroughly. It is not practical to directly transfer achievements from English sentiment analysis to Chinese because of the huge differences between the two languages.
Design/methodology/approach
In view of the particularity of Chinese text and the requirements of sentiment analysis, a Chinese sentiment analysis model integrating multi-granularity semantic features is proposed in this paper. This model introduces radical and part-of-speech features on top of the character and word features, applying bidirectional long short-term memory, an attention mechanism and a recurrent convolutional neural network.
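Two of the ingredients named above can be sketched with made-up dimensions: fusing multi-granularity features by concatenating character-level and word-level vectors per token, and attention pooling that weights tokens before classification. The paper additionally uses radical and part-of-speech features, BiLSTM and an RCNN; none of that is shown, and all shapes and values here are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
T, d_char, d_word = 6, 4, 8                    # 6 tokens, toy feature sizes

# Multi-granularity fusion: concatenate per-token character and word features
char_feats = rng.standard_normal((T, d_char))
word_feats = rng.standard_normal((T, d_word))
h = np.concatenate([char_feats, word_feats], axis=1)   # shape (6, 12)

# Attention pooling: score each token against a query vector, softmax, sum
q = rng.standard_normal(h.shape[1])
scores = h @ q
weights = np.exp(scores - scores.max())
weights /= weights.sum()                       # attention weights sum to 1
sentence_vec = weights @ h                     # (12,) weighted sentence vector
print(sentence_vec.shape)
```

A classifier head over `sentence_vec` would then produce the sentiment label; the ablation experiment in the findings amounts to dropping one of the concatenated feature blocks and measuring the F1 change.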
Findings
The comparative experiments showed that the F1 values of this model reach 88.28 and 84.80 per cent on the man-made dataset and the NLPECC dataset, respectively. Meanwhile, an ablation experiment was conducted to verify the effectiveness of the attention mechanism and of the part-of-speech, radical, character and word factors in Chinese sentiment analysis. The performance of the proposed model exceeds that of existing models to some extent.
Originality/value
The academic contributions of this paper are as follows. First, in view of the particularity of Chinese texts and the requirements of sentiment analysis, this paper focuses on addressing the deficiencies of Chinese sentiment analysis in the big data context. Second, this paper borrows ideas from multiple frontier interdisciplinary theories and methods, such as information science, linguistics and artificial intelligence, which makes it innovative and comprehensive. Finally, this paper deeply integrates multi-granularity semantic features such as character, word, radical and part of speech, which further complements the theoretical framework and method system of Chinese sentiment analysis.