Predictive modeling of turning operations under different cooling/ lubricating conditions for sustainable manufacturing with machine learning techniques

Sustainable manufacturing is one of the most important and most challenging issues in the present industrial scenario. With the intention of diminishing the negative effects associated with cutting fluids, the machining industries are continuously developing technologies and systems for cooling/lubricating of the cutting zone while maintaining machining efficiency. In the present study, three regression based machine learning techniques, namely, polynomial regression (PR), support vector regression (SVR) and Gaussian process regression (GPR), were developed to predict machining force, cutting power and cutting pressure in the turning of AISI 1045. In the development of the predictive models, the machining parameters of cutting speed, depth of cut and feed rate were considered as control factors. Since cooling/lubricating techniques significantly affect the machining performance, prediction model development of quality characteristics was performed under minimum quantity lubrication (MQL) and high-pressure coolant (HPC) cutting conditions. The prediction accuracy of the developed models was evaluated by statistical error analyzing methods. Results of the regression based machine learning techniques were also compared with probably one of the most frequently used machine learning methods, namely artificial neural networks (ANN). Finally, a metaheuristic approach based on a neural network algorithm was utilized to perform an efficient multi-objective optimization of process parameters for both cutting environments.


Introduction
Cutting fluids are traditionally used in metal cutting operations to improve tool life, surface quality and the productivity of the entire machining process. However, cutting fluids have negative effects on human health and the environment due to the presence of potentially harmful chemicals [1]. In addition, the use of cutting fluids represents a considerable share of total manufacturing costs [2]. Weinert et al. [3] estimated the cost of cutting fluids at around 7 to 17% of the aggregate machining costs. Nowadays, conventional flood cooling is the most common cooling/lubricating technique used to improve machining performance; however, it entails high cutting fluid consumption as well as power consumption. In the literature, ANN models have been reported for predicting surface roughness in hard turning of EN 24T steel under dry and high-pressure coolant jet machining environments. Apart from cutting conditions, cutting speed, feed rate and material hardness were used as the input variables. Different ANN architectures and several training methods were employed to determine the best predictive model. Mia and Dhar [27] formulated two predictive models of surface roughness, namely support vector regression and response surface methodology (RSM), in turning of AISI 1060 steel under dry and HPC conditions. The cutting speed, feed rate and material hardness were considered as input variables for model formulation. The results indicated that both methods can be utilized to predict the roughness value in dry turning, while the support vector regression model is preferable over RSM in HPC assisted turning. Kamruzzaman et al. [28] developed an ANN model of cutting temperature in terms of cutting speed, feed rate, depth of cut, workpiece material (C-60, 17CrNiMo4 and 42CrMo4) and cutting environment (dry, wet and HPC), and found 97.3% accuracy. Mia et al. [29] formulated an ANN-based predictive model of surface roughness for MQL-assisted hard turning, wherein cutting speed, feed rate and MQL flow rate were the inputs.
Their results indicate that the ANN model is capable of preserving 97.5% accuracy. Mia et al. [30] utilized SVR for the prediction of the average surface roughness parameter with respect to spindle speed, feed rate, depth of cut and time gap between pulsing in MQL assisted turning of high hardness steel. Their results show that the developed model is able to predict the output responses with 95.04% accuracy. Abbas et al. [31] developed regression models for the surface roughness and power consumption under dry, wet and nanofluid MQL-assisted turning of AISI 1045. Nouioua et al. [32] utilized response surface methodology and the ANN technique to predict surface roughness and cutting force in turning of X210Cr12 steel according to cutting speed, feed rate and cutting depth under dry, wet and MQL machining conditions. The ANN model was found to be better than the response surface methodology model in predicting these responses.
Machine learning techniques have been extensively utilized in the prediction of different machining responses in turning. However, the models presented in the literature mainly deal with dry and wet cutting. Furthermore, very little usable information is provided regarding the prediction of cutting power and cutting pressure under different cooling/lubricating conditions. This study presents the development of prediction models for machining force, cutting power and cutting pressure in turning using three regression based machine learning techniques (polynomial regression, support vector regression and Gaussian process regression) as well as artificial neural networks. Contrary to other presented works, here the estimation of the selected machining responses was carried out for different cooling/lubricating conditions. In particular, the study covered the MQL and HPC machining conditions. The selected machine learning techniques are moreover used for a comparative assessment of the machining responses in order to determine the best approach according to model accuracy and capability. In addition, a multi-objective optimization problem was also solved.

Experimental details
Straight turning of AISI 1045 (C45E) steel, supplied as bars 120 mm in diameter and 300 mm long, was carried out on a Boehringer lathe developing a spindle power of 8 kW, using standard carbide inserts SNMG 1204 08 NMX. In this study, focus is placed on the application of various cooling/lubricating techniques in machining. Therefore, the experiments were conducted under different machining environments, namely MQL and HPC. The MQL and HPC systems were attached to the experimental setup during the machining trials.
For MQL assisted turning, cutting fluid was supplied to the spray gun at a rate of 30 ml/h and mixed with compressed air (3 bar) in the mixing chamber of the spray gun. The mixture of air and cutting fluid was then supplied to the cutting zone by the spray gun nozzle located 30 mm away from the tool tip, at angles of 90° and 30° from the cutting edge and clearance face, respectively.
During HPC assisted turning, the cutting fluid was supplied at a constant pressure of 50 bar and a flow rate of 2 l/min through a 0.4 mm diameter nozzle directed normal to the cutting edge at a low angle (about 5–6°) with the cutting tool rake face. The nozzle was positioned 30 mm away from the tool tip in order to bring the jet fairly close to the tool-chip contact zone and to reduce the interference of the nozzle with the flowing chips.
Apart from the different machining environments, three cutting parameters, namely cutting speed (v), depth of cut (a) and feed rate (f), were also selected as control factors. Referring to Table 1, the three levels of cutting speed, three levels of depth of cut and four levels of feed rate generate 36 (3 × 3 × 4) experimental runs for each machining environment. The ranges of these parameters were selected based on the recommendations of the cutting tool manufacturer and in accordance with previous studies. Moreover, the parameter ranges were also extended in order to achieve higher productivity and to investigate the machining responses in the different machining environments.
The three components of the cutting force, namely, main cutting force (F c ), feed force (F f ) and passive force (F p ), were measured using the Kistler dynamometer type 9259A. The measurement chain further includes a charge amplifier (Kistler 5001), spectrum analyzer (HP3567A) and personal computer for data acquisition and analysis.
The machining force (F_R), cutting power (P_c) and cutting pressure (K_s) are computed from the measured force components and cutting parameters using the standard relations F_R = √(F_c² + F_f² + F_p²), P_c = F_c·v and K_s = F_c/(a·f). The obtained experimental data were divided into two data sets: a training data set for model development (75% of the entire data set) and a test data set for model validation (25% of the entire data set). Thus, 27 sets of randomly selected experimental trials were used for model construction, leaving the remaining 9 sets of data to test model performance. An identical data partition scheme was utilized for the MQL and HPC machining conditions. The detailed experimental conditions are listed in Table 2.
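Under the standard definitions above, the three responses can be computed from the measured force components with a short routine; the unit conventions shown (N, m/min, mm, mm/rev, with power converted to kW) are assumptions for illustration, since the paper's exact equations were lost in extraction:

```python
import math

def machining_responses(Fc, Ff, Fp, v, a, f):
    """Compute machining force, cutting power and cutting pressure
    from measured force components and cutting parameters.
    Fc, Ff, Fp: force components in N; v: cutting speed in m/min;
    a: depth of cut in mm; f: feed rate in mm/rev."""
    FR = math.sqrt(Fc**2 + Ff**2 + Fp**2)   # resultant machining force, N
    Pc = Fc * v / 60000.0                    # cutting power, kW (m/min -> m/s, W -> kW)
    Ks = Fc / (a * f)                        # cutting pressure (specific force), N/mm^2
    return FR, Pc, Ks
```

For example, a main cutting force of 300 N at v = 200 m/min corresponds to a cutting power of 1 kW under this unit convention.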
The experimental setup, comprising the work material, cutting tool, MQL and HPC systems and the machining environment, is shown in Figure 1.

Polynomial regression
Regression analysis is probably one of the most important aspects of statistical as well as machine learning based analysis. The objective of regression analysis is to model the relationship between a dependent variable and one or more independent variables [33]. The simplest approach to the regression task is linear regression, where the dependent (response) variable is modeled as a linear combination of the independent (input) variables. More advanced regression models include multiple regression analysis, where the dependent variable is linearly related to two or more independent variables. An assumed linear relationship between the dependent and independent variables might be inadequate to describe the particular relationship. Therefore, this paper deals with the task of polynomial regression. In a polynomial regression model, the relationship between the dependent variable and the independent variables is modeled in the form of a polynomial equation. Since polynomial regression models are considered special cases of multiple linear regression models, fitting these models with least squares does not introduce any new problem, and analysis of residuals can be utilized to determine the adequacy of the model.

The general expression for a second-order polynomial (quadratic) model, which represents the simplest extension of the straight-line model, is given by

y = β_0 + Σ β_i x_i + Σ β_ii x_i² + ΣΣ β_ij x_i x_j + ε

where y is the dependent variable, x_i and x_j are the independent variables, β_0 is the fixed term, β_i, β_ii and β_ij are the coefficients of the linear, quadratic and cross product terms, respectively, and ε is the random error.
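As an illustration of fitting such a quadratic model by least squares, the following sketch uses scikit-learn rather than the study's own tooling; the data values are synthetic placeholders standing in for [v, a, f] and a response, not the experimental measurements:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Illustrative inputs [v, a, f] and response values (made up for the sketch)
X = np.array([[150, 1.0, 0.1], [150, 1.5, 0.2], [210, 1.0, 0.2],
              [210, 1.5, 0.1], [260, 1.0, 0.1], [260, 1.5, 0.2]])
y = np.array([420.0, 780.0, 610.0, 520.0, 400.0, 730.0])

# PolynomialFeatures expands [v, a, f] into the constant, linear,
# squared and cross-product terms of the quadratic model above;
# LinearRegression then fits the coefficients by least squares.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=True),
                      LinearRegression())
model.fit(X, y)
pred = model.predict(np.array([[210, 1.2, 0.15]]))
```

With only second-order terms, the residuals of such a fit can be inspected to judge model adequacy, as noted above.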

Support vector regression
The support vector machine (SVM) is a relatively novel algorithm based on the theoretical foundation of statistical learning theory proposed by Vapnik [34]. Known for its excellent generalization ability, robustness, small number of adjustable parameters, single global optimum solution and lack of need for experimentation to find the learning machine architecture, SVM is perhaps the most widely accepted machine learning approach for supervised learning. SVM acts by producing a separating hyperplane that maximizes the margin between two data sets according to their classes, which have been previously mapped to a high dimensional space. The margin is established by creating two parallel hyperplanes on each side of the separating hyperplane. The larger the margin between the classes, the better the generalization error of the classifier. Thus, an optimal separation (solution) is attained by the hyperplane which has the largest distance to the neighboring data points of the two classes. The points on the boundary of the slab that are closest to the separating hyperplane are called support vectors. After the support vectors are selected, the remainder of the feature set can be excluded, because the support vectors contain all the indispensable information for the classifier (Figure 2). SVM provides particular distinguishing features that make it an effective tool in modeling and prediction tasks, with widespread application in many engineering areas. One of the major advantages of using SVM is that the model can be determined by specifying only a few parameters, namely the kernel function, the loss function, the cost function, etc. Furthermore, an appropriate architecture does not have to be specified before training, and SVM produces a unique solution after training. Although SVM was originally developed for classification problems, the algorithm can also be applied to regression problems through the introduction of a loss function that includes a distance measure.
The method by which regression problems can be solved through SVM is known as support vector regression (SVR).
Consider a set of data points {x_i, y_i}, i = 1, ..., N, such that x ∈ R^n is an input, y ∈ R is the corresponding target output to be estimated by the regression function, and N is the total number of data patterns. The idea of the regression problem is to find a function f(x) that has at most ε deviation from the actually obtained targets y_i for all the training data. The nonlinear relationship between the output and the input can be described by the following regression function:

f(x) = w·φ(x) + b    (5)

where w is the weight vector, φ(x) is a nonlinear function that maps the input pattern x from R^n into a higher-dimensional feature space and b is the bias term. The objective is to find the values of the unknown parameters, the weight vector w and bias term b, by minimizing the regularized regression risk

R = (1/2)||w||² + C Σ_{i=1..N} L_ε(y_i, f(x_i))    (6)

where C is the regularization constant and L_ε(y, f(x)) is the loss function. The most common loss function, namely the linear loss function with ε-insensitivity zone, was proposed by Vapnik [34] and is given by

L_ε(y, f(x)) = 0 if |y − f(x)| ≤ ε; |y − f(x)| − ε otherwise.    (7)

The parameter ε defines the tolerated difference between the actual values and the values computed from the regression function.
Introducing two positive slack variables ξ_i and ξ_i*, which represent the distance from the actual values to the corresponding boundary values of the ε-tube, it is possible to transform Eq. (6) into a primal objective function:

minimize (1/2)||w||² + C Σ_{i=1..N} (ξ_i + ξ_i*)    (8)

subject to

y_i − w·φ(x_i) − b ≤ ε + ξ_i,  w·φ(x_i) + b − y_i ≤ ε + ξ_i*,  ξ_i, ξ_i* ≥ 0    (9)

By adding Lagrangian multipliers, this constrained optimization problem can be solved through the Lagrangian function (10), which is minimized with respect to the primal variables w, b, ξ and ξ*, and maximized with regard to the non-negative Lagrangian multipliers α_i, α_i*, β_i and β_i*. Finally, applying the appropriate Karush-Kuhn-Tucker conditions to Eq. (9) yields the dual Lagrangian form of the optimization problem. After obtaining the values of the Lagrange multipliers, the optimal weight vector of the regression is

w = Σ_{i=1..N} (α_i − α_i*) φ(x_i)    (13)

Thus, the regression function can be written as

f(x) = Σ_{i=1..N} (α_i − α_i*) K(x_i, x) + b    (14)

where K(x_i, x_j) is the kernel function, described in the feature space as K(x_i, x_j) = φ(x_i)·φ(x_j). The kernel function can be any function satisfying Mercer's condition. Several kernel functions can be used, such as the Gaussian or radial basis function (RBF) kernel, the polynomial kernel and the linear kernel. In this study, the radial basis kernel function has been chosen:

K(x_i, x_j) = exp(−||x_i − x_j||² / (2σ²))    (16)

where σ is the kernel width parameter.
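A minimal ε-SVR example with the RBF kernel, using scikit-learn on synthetic one-dimensional data; the parameter values C, ε and γ (where γ corresponds to 1/(2σ²)) are illustrative, not those tuned in the study:

```python
import numpy as np
from sklearn.svm import SVR

# Toy 1-D regression data: a sine curve with mild noise
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, 40)).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.normal(size=40)

# C: regularization constant from Eq. (6); epsilon: width of the
# insensitivity tube; gamma relates to the RBF kernel width sigma.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=0.5)
svr.fit(X, y)

# Only the support vectors (points on or outside the epsilon-tube)
# carry information for the fitted function f(x).
n_sv = len(svr.support_)
```

Points lying strictly inside the ε-tube receive zero loss under Eq. (7), which is why the fitted function depends only on the support vectors.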

Gaussian process regression
Gaussian process regression (GPR) for machine learning was initially proposed by Williams and Rasmussen [35]. Compared with other regression techniques based on the kernel method, such as SVM, GPR is a probabilistic model based on standard Bayesian approaches. GPR is very convenient for dealing with complex problems of high dimensionality, nonlinearity and small numbers of training samples. As a result of its very good performance, GPR has been widely applied in recent years in various fields of engineering. Assume that D = {X, y} is a set of training data, where X = [x_1, x_2, ..., x_n] collects input vectors x_i ∈ R^d and y = [y_1, y_2, ..., y_n] is a vector containing the scalar training outputs y_i ∈ R (extension to multiple outputs is possible). Each output y_i is assumed to contain zero-mean additive Gaussian noise with variance σ_n², i.e. p(ε_i) = N(0, σ_n²).

Furthermore, assuming that the outputs are independent and identically distributed, each observation y_i can be thought of as related to an underlying function f(x) through a Gaussian noise model, y_i = f(x_i) + ε_i. The joint distribution over the (noisy) outputs is a zero-mean Gaussian of the form

p(f(x) | x_1, x_2, ..., x_n) = N(0, K(x, x′) + σ_n² I)

where f = [f_1, f_2, ..., f_n]^T is a vector of latent function values, K(x, x′) is the covariance (kernel) matrix with elements K_ij = K(x_i, x_j), the term σ_n² I introduces the Gaussian noise, and I is the identity matrix.
Given the training samples and a set of test points X*, the goal of GPR is to find the predictive outputs f* with probabilistic confidence levels. According to the definition of a Gaussian process, a prior joint distribution of the training outputs f and the test outputs f* can be formulated, and the likelihood of the observations is p(y | f) = N(f, σ_n² I). Assuming that the hyperparameters involved in K were learned from the training data in advance, the posterior distribution can be obtained as the Gaussian predictive distribution p(f* | X, y, X*) = N(m, Σ), where the mean m and the variance Σ are given by

m = K(X*, X) [K(X, X) + σ_n² I]⁻¹ y

Σ = K(X*, X*) − K(X*, X) [K(X, X) + σ_n² I]⁻¹ K(X, X*)

The squared exponential kernel function evaluates the covariance between two input feature vectors x_i and x_j as [36]

K(x_i, x_j) = σ_s² exp[−(1/2)(x_i − x_j)^T M⁻² (x_i − x_j)] + σ_n² δ_ij

where σ_s² is the signal variance that quantifies the overall magnitude of the covariance value (usually initialized to 1), M = diag{l_1, l_2, ..., l_m} is a diagonal matrix of scaling factors l, and δ_ij is Kronecker's delta function that serves to selectively add the noise variance σ_n² to the covariance value.
The parameters of the kernel function, denoted by θ = [M, σ_s, σ_n], are called the hyperparameters of the Gaussian process. These parameters can be learned by maximizing the log marginal likelihood of the training outputs given the inputs:

log p(y | X, θ) = −(1/2) y^T (K + σ_n² I)⁻¹ y − (1/2) log |K + σ_n² I| − (n/2) log 2π

This nonlinear optimization problem can be solved using numerical optimization techniques, such as gradient-based methods [37].
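The GPR formulation above, a squared exponential kernel plus noise term with hyperparameters learned by maximizing the log marginal likelihood, can be sketched with scikit-learn; the kernel settings and the one-dimensional data are illustrative defaults, not the study's tuned values:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

# Noise-free sine data for illustration
X = np.linspace(0, 5, 25).reshape(-1, 1)
y = np.sin(X).ravel()

# ConstantKernel ~ signal variance sigma_s^2, RBF ~ length scales (M),
# WhiteKernel ~ noise variance sigma_n^2; the fit maximizes the
# log marginal likelihood over these hyperparameters.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=3, random_state=0)
gpr.fit(X, y)

# Predictive mean m and standard deviation (from the variance Sigma)
mean, std = gpr.predict(np.array([[2.5]]), return_std=True)
```

Unlike SVR, the prediction comes with a variance, so each estimate carries its own confidence level.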

Artificial neural networks
In the last few decades, machine learning techniques, most notably artificial neural networks (ANN), have caught the interest of researchers in practically all engineering fields [38]. ANN have been inspired by the information processing of the human brain in an effort to achieve human-like performance [39]. Because of their nonlinear function approximation capability, noise resistance, adaptability and good generalization capability, ANN are especially useful for modeling machining processes characterized by many highly interrelated parameters. Different ANN models have been proposed in the literature, but the multi-layer perceptron (MLP) is the most widely used. MLP is a kind of feed-forward ANN consisting of neurons arranged in three types of layers: (i) an input layer, (ii) an output layer and (iii) one or more hidden layers. Each layer contains a group of neurons (nodes) that are linked with neurons from other layers by connections between the neurons. Each neuron within the network is typically a simple processing unit where basic calculations are performed to process one or more inputs and produce the appropriate outputs. The links between neurons, or synapses, have associated weights which control the output of the neuron. The outputs of the ANN can be modified by adjusting the values of the synaptic weights. This adjustment of weights is designed to proceed in a direction that minimizes the difference between the ANN outputs and the desired response vectors. The back-propagation (BP) algorithm is probably the most popular training technique in the field of ANN. Thus, in the present study, a multi-layer feed forward ANN based on the BP algorithm is selected to develop the prediction model. Figure 3 shows a typical BP network architecture containing one input layer, one hidden layer and one output layer. The input layer consists of a set of neurons representing the process input features.
The number of hidden layers as well as the number of nodes per hidden layer is usually determined through a trial and error method, by increasing or decreasing the number of hidden layers and neurons during training. The last layer acts as the network output layer, and the number of neurons in it is equal to the number of functions being approximated by the model.

Predicting responses using machine learning techniques
In this section, four predictive models, namely polynomial regression (PR), support vector regression (SVR), Gaussian process regression (GPR) and artificial neural networks (ANN) were developed and compared on the basis of their prediction accuracy.
First, from the set of experimentally obtained data, polynomial regression models to estimate the machining force, cutting power and cutting pressure as functions of the machining parameters, i.e. cutting speed (v), depth of cut (a) and feed rate (f), in MQL assisted turning were determined as

F_R,MQL = 929.3217 − 1.7317v − 266.9927a − 532.4534f + 0.6772va + 2146.3422af    (27)

P_c,MQL = 7.3469 − 0.0192v − 3.9496a − 17.379f + 0.0118va + 0.0502vf + 10.1892af    (28)

Among the 36 data sets of process variables related to the MQL machining condition, 27 data sets were utilized to create the regression equations, while 9 data sets were reserved to test the established equations' predictive capacity. An identical approach was applied to the experimental data sets for the HPC machining condition. The analysis of variance (ANOVA) was utilized to verify the significance of the developed regression models. This analysis was carried out for a significance level of α = 0.05, i.e. for a confidence level of 95%. The ANOVA results show that the developed mathematical models for machining force, cutting power and cutting pressure are adequate, irrespective of the cutting condition (MQL or HPC). For the SVR-based models, the Statistics and Machine Learning Toolbox from Matlab was used. The selected machining parameters for the SVR inputs were the cutting speed, depth of cut and feed rate, whereas the quality responses were the machining force, cutting power and cutting pressure. The type of kernel function, the kernel function parameters, the value of C and the value of ε for the ε-insensitive loss function are the most important factors affecting the performance of an SVR-based model. Optimized values of these parameters can be determined using various methods, such as cross validation, grid search, genetic algorithm, particle swarm optimization, etc. In the present work, the RBF kernel was selected for developing all SVR-based models.
The kernel scale was estimated automatically using a heuristic subsampling procedure implemented by built-in Matlab functions. For medium-sized problems, the grid search method is an efficient technique for determining the optimum values of the SVR-based model parameters. In this method, the parameters are varied by fixed step-sizes across a range of values, and the performance of each set of parameters is compared using different statistical measures, such as maximizing the correlation coefficient, or minimizing the normalized root mean square error or mean absolute percentage error. To improve the generalization ability, this method can be combined with a cross validation process. In the present study, the optimal values of C and ε for each SVR-based model were determined employing the grid search method.
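The grid search over C and ε with cross-validation described here can be sketched as follows; the grids, data and scoring choice are illustrative assumptions, not the values used in the study (which employed Matlab's toolbox):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Stand-in data: 30 scaled [v, a, f]-like inputs and a synthetic response
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (30, 3))
y = X @ np.array([2.0, -1.0, 3.0]) + 0.01 * rng.normal(size=30)

# Fixed-step grids over C and epsilon; each combination is scored by
# 5-fold cross-validation and the best-performing pair is retained.
param_grid = {"C": [0.1, 1, 10, 100], "epsilon": [0.01, 0.05, 0.1]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5,
                      scoring="neg_root_mean_squared_error")
search.fit(X, y)
best = search.best_params_
```

The cross-validated score guards against picking a (C, ε) pair that merely overfits the training folds.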

The Matlab Statistics and Machine Learning Toolbox was also used for creating and testing the different GPR-based models of the machining force, cutting power and cutting pressure in the MQL and HPC machining conditions. The determination of appropriate values of the hyperparameters is a crucial step for the prediction capability of any GPR-based model; therefore, the grid search method was utilized to solve that problem. There are several key parameters and functions influencing a GPR-based model. In this study, parameters such as the kernel function, the basis function and the initial value of the noise standard deviation of the GPR model were optimized to obtain the best model configuration for the prediction of the selected quality responses.
In the ANN model, a back-propagation algorithm was used to predict the machining responses, where the error for the hidden layers is determined by propagating back the error determined for the output layer. As previously mentioned, the numbers of neurons in the input and output layers are governed by the numbers of input and output variables, respectively. However, the number of hidden layers and the favorable number of neurons in each hidden layer depend on the complexity of the target function, the generalization capabilities, the computation time required for training, the risk of over-fitting, etc. Therefore, network optimization was performed by adjusting the number of hidden layers and the number of nodes in these layers through a trial and error method, in order to improve the converged error. After examining different neural network architectures, a network structure with one hidden layer and nine neurons was found to be accurate and reliable in the present investigation. Among the different training methods, Levenberg-Marquardt was selected as the training algorithm because it consumes less memory and requires less computation time. The hyperbolic tangent sigmoid transfer function was used between the input and hidden layers, and a linear transfer function was utilized between the hidden and output layers. The network was trained for up to 10,000 epochs with a learning rate of 0.03 and a momentum term of 0.1, until the error between the desired and actual outputs was less than 0.001.
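A sketch of the network described above, one hidden layer with nine tanh neurons and a linear output: scikit-learn offers no Levenberg-Marquardt trainer, so the L-BFGS solver stands in here, and the training data are synthetic placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# 27 synthetic training points standing in for the [v, a, f] training set
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (27, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]

# One hidden layer, nine neurons, tanh activation, linear output layer;
# L-BFGS replaces Levenberg-Marquardt, which scikit-learn does not provide.
mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(9,), activation="tanh",
                 solver="lbfgs", max_iter=10000, random_state=0))
mlp.fit(X, y)
```

Input scaling matters for tanh units, hence the StandardScaler in the pipeline.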
The prediction accuracy of the developed models was analyzed in terms of four statistical measures: mean absolute percentage error (MAPE), maximum absolute percentage error (MaxAPE), mean absolute error (MAE) and normalized root mean square error (NRMSE). Furthermore, the coefficient of determination (R²) was also used as a performance metric. These statistical metrics are defined as follows:

MAPE = (1/n) Σ |T_i − Y_i| / T_i × 100%

MaxAPE = max |T_i − Y_i| / T_i × 100%

MAE = (1/n) Σ |T_i − Y_i|

NRMSE = √[(1/n) Σ (T_i − Y_i)²] / σ

R² = [Σ (T_i − T̄)(Y_i − Ȳ) / √(Σ (T_i − T̄)² Σ (Y_i − Ȳ)²)]²

where n is the number of data patterns, T_i and Y_i are the experimental and predicted results of the ith pattern, respectively, T̄ and Ȳ are the average values of the experimental and predicted results, respectively, and σ is the standard deviation of T_i.
MAPE, MaxAPE, MAE and NRMSE were used to measure the deviation between the observed and predicted values. The smaller the values of these performance metrics, the closer the predicted values are to the observed values. The statistical metric R² was used to measure the correlation between the observed and predicted values; a value of 1 indicates a perfect relationship between the two variables. Performance evaluations of the different machine learning models in terms of these five statistical measures for the test data set are shown in Table 3. Furthermore, to illustrate the obtained results more clearly, performance comparisons of the machine learning techniques for machining force, cutting power and cutting pressure prediction are also shown in Figures 4-6, respectively.
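The four deviation metrics can be computed directly from their definitions; this helper is a sketch that assumes the population standard deviation as the NRMSE normalizer:

```python
import numpy as np

def error_metrics(T, Y):
    """MAPE, MaxAPE, MAE and NRMSE between experimental (T) and predicted (Y) values."""
    T, Y = np.asarray(T, float), np.asarray(Y, float)
    ape = np.abs((T - Y) / T) * 100.0          # absolute percentage errors
    mape = ape.mean()                           # mean absolute percentage error
    maxape = ape.max()                          # maximum absolute percentage error
    mae = np.abs(T - Y).mean()                  # mean absolute error
    nrmse = np.sqrt(np.mean((T - Y) ** 2)) / T.std()  # RMSE normalized by std of T
    return mape, maxape, mae, nrmse
```

For example, experimental values [100, 200] against predictions [110, 190] give a MAPE of 7.5% and a MaxAPE of 10%.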
From the comparative results of Table 3, for MQL assisted turning, MAPE values are within the range 0.7-2.7% and MaxAPE values vary from 1.2 to 9.5%. The highest values of these two metrics were found in modeling of cutting power using PR. It is evident that the SVR and GPR methods outperformed PR, while ANN slightly outperformed both the SVR and GPR models for all performance characteristics. A similar conclusion can be drawn with regard to the MAE and NRMSE analysis. It was also found that the coefficient of determination for the predicted machining force and cutting power values is very high (above 0.99), with slightly lower values in modeling of cutting pressure (above 0.96). According to the presented results for HPC assisted turning, MAPE values are within the range 0.6-2.1% and MaxAPE values lie between 1.3 and 6.7%. The highest values of MAPE and MaxAPE were found in modeling of cutting power when PR was employed, just as in the MQL cutting condition. The results for these two metrics, as well as MAE and NRMSE, again confirm the superiority of SVR and GPR over the PR method. In addition, it was found that ANN was even slightly more accurate than the SVR and GPR methods when estimating all responses. The determination coefficients for the estimated machining force and cutting power values were both very close to 1 (above 0.99), while somewhat lower values (above 0.94) were observed in estimating the cutting pressure.
Based on the analysis of the results, it is obvious that all the considered machine learning techniques are accurate, efficient and practical tools for the estimation of machining force, cutting power and cutting pressure under different cooling/lubricating conditions. The comparison of the regression based machine learning techniques shows that the SVR and GPR models have similar performance and better accuracy than the PR-based models. Moreover, the ANN-based models were even somewhat more accurate than the regression based machine learning techniques for all data sets. Despite the fact that SVR, GPR and ANN outperformed the PR-based method, the results produced by this method were quite satisfactory. Additionally, in terms of computational time, the PR method is the fastest because its training does not require much parameter tuning.

Multi-objective optimization
The relationships between machining parameters and responses have been established using different machine learning methods. However, the SVR, GPR and ANN-based models are characterized by complex non-linear functions, so conventional optimization methods are difficult to use effectively and consistently. On the contrary, the PR-based models are simple yet effective, easy to interpret and broadly applicable for relating input and output parameters. Therefore, the mathematical relations between machining variables and responses for the MQL and HPC machining environments, given in Eqs. (27)-(29) and Eqs. (30)-(32), respectively, were used as functional equations to establish the objective functions. The objective was to find the optimal machining conditions for minimizing the machining force, cutting power and cutting pressure simultaneously. Hence, for the multi-objective optimization of turning under the MQL and HPC cutting conditions, complex objective functions (COF) were developed as weighted combinations of the three responses, where w_1, w_2 and w_3 are the weight values of the machining force, cutting power and cutting pressure, respectively. In this study, equal weights for all responses were selected, i.e. w_1 = w_2 = w_3 = 1/3. The minimization of the developed complex objective functions was performed subject to operation constraints that impose the lower and upper limits of the experimental parameters. To solve such problems, different metaheuristic methods, such as the genetic algorithm, particle swarm optimization, differential evolution, simulated annealing, etc., are commonly applied. In the present research, a relatively new optimization algorithm, namely the neural network algorithm, was employed for the multi-objective optimization of the MQL and HPC assisted turning.
This method is developed based on the structure and concepts of artificial neural networks in terms of generating new candidate solutions, and it also employs other operators used in conventional ANN. More details about this optimization algorithm can be found in [40].
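To illustrate the weighted-sum minimization, the sketch below combines the two recoverable MQL models, Eqs. (27) and (28); the normalizing constants, weights, parameter bounds and the use of a gradient-based solver in place of the neural network algorithm are all assumptions made for illustration, not the study's setup:

```python
import numpy as np
from scipy.optimize import minimize

# PR models from Eqs. (27)-(28); the cutting-pressure model was not
# recoverable from the text, so only two responses are combined here.
def FR(v, a, f):
    return (929.3217 - 1.7317*v - 266.9927*a - 532.4534*f
            + 0.6772*v*a + 2146.3422*a*f)

def Pc(v, a, f):
    return (7.3469 - 0.0192*v - 3.9496*a - 17.379*f
            + 0.0118*v*a + 0.0502*v*f + 10.1892*a*f)

# Weighted sum of responses scaled by placeholder normalizers so the
# two terms are comparable (a hypothetical normalization scheme).
FR_ref, Pc_ref = 400.0, 1.0
def cof(x, w=(0.5, 0.5)):
    v, a, f = x
    return w[0] * FR(v, a, f) / FR_ref + w[1] * Pc(v, a, f) / Pc_ref

# Assumed lower/upper limits on v (m/min), a (mm), f (mm/rev)
bounds = [(110, 210), (1.0, 2.0), (0.1, 0.4)]
res = minimize(cof, x0=[150, 1.5, 0.2], bounds=bounds, method="L-BFGS-B")
```

A metaheuristic such as the neural network algorithm explores the same bounded search space but does not require gradients of the objective.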
The minimum values of the complex objective functions, COF_MQL = 1.147 and COF_HPC = 1.095, were found after 8 and 5 iterations, respectively. The optimization results revealed that, for both cutting environments, the best combination of machining parameters for simultaneously optimizing the machining force, cutting power and cutting pressure was: 210 m/min for cutting speed, 1.5 mm for depth of cut and 0.224 mm/rev for feed rate. Thus, experimental trial 1 has the optimal combination of machining parameters for MQL as well as for HPC assisted turning.

Conclusion
The present study presented a comparative analysis of four machine learning methods, polynomial regression, support vector regression, Gaussian process regression and artificial neural networks, for machining force, cutting power and cutting pressure prediction in the turning of AISI 1045 using coated carbide tools. In the developed models, the input data include cutting parameters, namely cutting speed, depth of cut and feed rate. The prediction of the selected quality characteristics was carried out for two different machining environments; specifically, the study covered minimum quantity lubrication and high-pressure coolant assisted turning. The performance of the four methods was evaluated in terms of different statistical measures, namely mean absolute percentage error, maximum absolute percentage error, mean absolute error, normalized root mean square error and coefficient of determination, and very good agreement with the experimental results was observed.
The developed prediction models of machining force, cutting power and cutting pressure exhibited very high prediction accuracy for the MQL as well as the HPC machining environment. According to the presented results, MAPE values are within the ranges 0.7-2.7% and 0.6-2.1%, whereas MaxAPE values vary from 1.2 to 9.5% and from 1.3 to 6.7%, for the MQL and HPC cutting environments, respectively. The highest values of these two statistical measures were observed in modeling of cutting power in both cutting environments when PR was employed. The comparison was also made using MAE and NRMSE as performance measures. When the regression based machine learning techniques are compared, it is found that the SVR and GPR models have comparable performance and obtain better accuracy than the PR-based model. Moreover, the results revealed that the ANN-based models have slightly better accuracy than the regression based machine learning methods when estimating the quality characteristics for both cutting environments. Summarizing the statistical results, it can be concluded that the selected machine learning techniques produce adequate results when compared with the experimental outcomes. The estimated responses on the test data set were found to be closely correlated with the real performance results. Thus, the developed models can provide acceptable estimates in place of experiments, which consequently reduces testing cost and time.
Moreover, mathematical models for the multi-objective optimization were established based on the polynomial regression method, and a metaheuristic approach based on a neural network algorithm was used to obtain the optimal solutions. The optimal combination of machining parameters for both cutting environments based on the studied performance criteria (i.e. machining force, cutting power and cutting pressure) was found to be 210 m/min for cutting speed, 1.5 mm for depth of cut and 0.224 mm/rev for feed rate.
The performance can additionally be enhanced by considering a wider range of cutting conditions, taking into account other major aspects of cutting operations, such as tool coatings, tool geometry and workpiece materials, as well as considering additional quality characteristics.