Wholesale price forecasts of green grams using the neural network

Purpose – Agricultural commodity price forecasts have long been important to a variety of market players. The study we conducted aims to address this difficulty by examining the weekly wholesale price index of green grams in the Chinese market. The index covers a ten-year period, from January 1, 2010, to January 3, 2020, and has significant economic implications. Design/methodology/approach – In order to address the nonlinear patterns present in the price time series, we investigate the nonlinear auto-regressive neural network as the forecast model. This modeling technique is able to combine a variety of basic nonlinear functions to approximate more complex nonlinear characteristics. Specifically, we examine prediction performance across several configurations of data splitting ratios, hidden neuron and delay counts, and model estimation approaches. Findings – Our model turns out to be rather simple and yields forecasts with good stability and accuracy. Relative root mean square errors for training, validation and testing are 4.34%, 4.71% and 3.98%, respectively. Benchmark results show that the neural network produces statistically significantly better performance than other machine learning models and classic time-series econometric methods. Originality/value – Our findings could be used as independent technical price forecasts. Alternatively, they could be combined with other (fundamental) prediction outputs to conduct policy research and gain fresh insights into price patterns.


Introduction
Projections of commodity prices in the agricultural and resource industries are crucial to a wide range of market participants, including processors, speculators, the media, hedgers, policy makers and economists (Li et al., 2021; Xu, 2017c; Wang et al., 2020; Yin et al., 2020; Xu and Zhang, 2023i; Drachal and Pawłowski, 2021). Producers usually need price projections in order to determine sales pricing before production begins; exporters and processors require them to fulfill their contractual commitments; speculators need them to make money; hedgers require them to control risks; and policymakers require them to develop, oversee and evaluate strategic plans and initiatives (Holt and Brandt, 1984; Xu and Zhang, 2022b; Rahman et al., 2013; Esther and Magdaline, 2017; Vishwajith et al., 2014; Padhan, 2012). The price forecasting problem for green beans (also called green grams or mung beans) in China should be no exception, given the commodity's major market share and deep integration into financial systems (Wen and Wang, 2004; Wang and Ke, 2005).

Asian Journal of Economics and Banking

learning models investigated in earlier research. These results, together with those discussed in Bayona-Oré et al. (2021) and Majid (2018), appear to suggest that neural networks are possibly the most often employed machine learning model for agricultural commodity price prediction; nevertheless, this is by no means a comprehensive analysis. According to different studies, neural networks show considerable promise for predicting noisy and chaotic time series data in a range of contexts (Xu and Zhang, 2023j, q; Karasu et al., 2017a, b), including the financial and economic sectors (Xu and Zhang, 2023d, s; Kumar et al., 2021; Xu, 2015b, 2018a, b, d; Yang et al., 2008, 2010; Wang and Yang, 2010; Karasu et al., 2020; Wegener et al., 2016). Their ability to recognize and forecast nonlinear patterns (Xu and Zhang, 2021c; Altan et al., 2021; Xu, 2018c) in a variety of time series (Xu and Zhang, 2021d, 2023a, 2024b; Abraham et al., 2020; Zhan and Xiao, 2021) through self-learning (Xu and Zhang, 2023n, 2024d; Karasu et al., 2020) may be helpful in this respect. In this case, we employ a neural network to predict the price of green beans, a crucial agricultural commodity on the Chinese market.
To perform our research, we investigate the nonlinear auto-regressive neural network as the forecast approach, since it can accurately model a wide range of nonlinear features. We make use of the weekly green bean wholesale price index, which shows nonlinear trends over a ten-year period from January 1, 2010 to January 3, 2020. We investigate the prediction accuracy associated with various combinations of model estimation techniques, hidden neuron and delay counts, and data splitting ratios. Our approach is quite simple and yields highly consistent and accurate projections. In a benchmark examination, we compare the developed neural network to conventional time-series econometric models, including a no-change model, an autoregressive model and an autoregressive-generalized autoregressive conditional heteroskedasticity model, and to other types of machine learning models, including a regression tree model and an SVR model. This comparison demonstrates that the neural network generates forecast accuracy that is statistically significantly greater than that of the five alternative models. As far as we are aware, and taking into account the preceding studies described above, this is the first examination of forecasts of the green bean wholesale price in China. The importance of quick and accurate commodity price estimates for market participants and policy makers can hardly be overestimated, as they enable quick portfolio adjustments, risk management and market assessments. By employing weekly wholesale data to analyze commodity price forecast challenges, the current study expedites decision making. Firstly, our results might be applied as standalone technical price forecasts. Secondly, they might be utilized in conjunction with other (fundamental) forecast results to develop opinions on price trends and carry out policy research.

Literature review
Econometricians have dedicated a great deal of time and effort to developing accurate and dependable commodity price forecasts for the resource and agriculture sectors. The ARIMA, for example, has proven highly popular in prior research and is still actively sought after for a wide variety of time series forecast assignments. The ARIMA was found to considerably outperform estimates based on expert opinion and structural models for the US hog and cattle markets (Brandt and Bessler, 1981, 1983; Bessler and Brandt, 1981).
Thorough research found very little potential for accuracy gains in hog price estimates when switching from ARIMA models to ones that use extra sow farrowing cost data (Brandt and Bessler, 1982, 1984; Kling and Bessler, 1985; Bessler, 1990). On the other hand, incorporating data from the exchange rate series was demonstrated to raise model performance for wheat prices, improving the ARIMA model's forecast accuracy (Bessler and Babula, 1987). Additionally, some studies suggested that combining the ARIMA with multiple model types, rather than depending only on one input source, might increase prediction accuracy (Bessler and Chamberlain, 1988; McIntosh and Bessler, 1988). The VAR is yet another important econometric tool for price series forecasts (Bessler and Hopkins, 1986; Xu and Zhang, 2023b; Chen and Bessler, 1987; Bessler and Brandt, 1992; Awokuse and Yang, 2003; Rezitis, 2015), building upon the correlations between several economic variables (Long et al., 2021; Ashikuzzaman, 2022; Sugita, 2022; Baba and Sevil, 2020). Testing both models on the prediction problem of US cotton prices revealed that the VAR tended to perform better than structural models during periods of usual price volatility (Chen and Bessler, 1990). Research demonstrated the value of the VAR in distinguishing the predictive power of a set of global wheat futures prices (Yang et al., 2003) and a set of regional soy and soybean prices in the US (Babula et al., 2004). In close association with the VAR, the VECM further incorporates the long-term link(s) between economic variables through cointegration (Ngong et al., 2023; Duong, 2023; Chettri et al., 2022; Yussuf, 2022); it can be quite helpful for long-term price forecasts (Yang and Leatham, 1998; Yang and Awokuse, 2003; Xu, 2019a, c; Yang et al., 2021). In terms of predicting wheat prices internationally, for example, it was shown that the VECM generally performs better than the VAR (Bessler et al., 2003). It was also shown that using the VECM instead of certain other models had a general benefit for several different agricultural price series (Wang and Bessler, 2004).
The previously described econometric models have shown promise in research focusing on price predictions, including for green beans specifically. For instance, Kumar (2019) and Chaudhari and Tingre (2014) discovered that the ARIMA could be used to predict the prices of green grams in Odisha and Maharashtra, respectively. According to Hossain et al. (2006), the ARIMA is appropriate for predicting mung bean prices in Bangladesh. Dongo (2007) used the ARIMA and VAR to analyze price estimates for soybeans and green beans on the Chinese futures market, finding that while the ARIMA produces reasonable accuracy, the VAR produces better results. De Silva and Herath (2016) used the ARIMA with generalized autoregressive conditional heteroscedasticity (GARCH) to study retail price projections of seventeen agricultural commodities in Sri Lanka, including green beans, carrots and cabbages. They discovered that for the majority of the commodities they examined, the autoregressive moving average (ARMA)(1,1)-GARCH(1,1) is sufficient. For green gram prices, Pani et al. (2019) found similar empirical results.
The application of machine learning techniques to commodity price forecasts has advanced, as the literature has shown, and the situation with green beans is no exception. For instance, Xiong et al. (2018) used a hybrid strategy that combines seasonal-trend decomposition and extreme learning machines to investigate price forecast difficulties for cabbages, green beans, tomatoes, cucumbers and peppers in China. The resulting ensemble forecasts outperformed the individual models they evaluated, such as the SVR, seasonal ARIMA and time-delay neural network. Similarly, Li et al. (2014) used the Hodrick-Prescott filter and a neural network to investigate price forecast difficulties for cabbages, green beans, tomatoes, peppers and cucumbers in China. They found that the hybrid model outperforms both the neural network alone and the ARIMA. Several candidate machine learning models, such as neural networks, SVRs and extreme learning machines, were investigated by Zhang et al. (2020) for the purpose of predicting the prices of various agricultural commodities in China, including pigs, grains and beans. When creating prediction models, they focused on feature selection and proposed that the best models may be contingent on the commodities in question and the time series characteristics of their pricing.
Though recent research has suggested hybrid and combination models for financial and economic predictions, whose underlying data are typically complex and involve nonlinear behavior, the focus is not always on price projections for agricultural commodities (Prananta and Alexiou, 2023). For instance, Liu et al. (2024) recommended combining the dynamic conditional correlation version of the generalized autoregressive conditional heteroscedasticity model with the neural network model for the purpose of predicting the Bitcoin trading series. Mahmoodi et al. (2023) suggested combining the SVR with particle swarm optimization for candlestick technical analysis; their approach was demonstrated to be more effective than the neural network in some particular circumstances. Another direction of research is to compare the prediction performance of different machine learning techniques. For example, Maneejuk et al. (2023) compared the backpropagation version and the extreme learning machine version of the neural network for forecasting Chinese stock prices through convertible bonds.
Little research has addressed the difficult task of forecasting the price series of green grams in the Chinese wholesale market, a crucial agricultural commodity price indicator with substantial economic significance for both policymakers and market participants. Because the literature indicates that neural networks have a lot of potential for time-series forecasting, this study aims to close this gap by building a neural network model that provides reliable and accurate prediction results, which will aid price trend analysis and decision-making processes.

Data
We analyze data from the weekly price index of green beans on the Chinese market over a ten-year period, starting on January 1, 2010, and ending on January 3, 2020. Figure 1 shows the price index and its first-difference series for green beans in the top and bottom left panels. In the top and bottom center panels of Figure 1, we additionally provide forty-bin histograms and kernel density estimates of the price and its first differences in order to show the data distributions.

The price and its first differences are plotted as quantiles against the standard normal distribution in the top and bottom right panels of Figure 1. Table 1 displays the price data's summary statistics. That the price and its first differences are not normally distributed is unsurprising, as is to be expected for time series in the financial and economic domains, as Figure 1 and Table 1 illustrate (Xu, 2017b, 2019b; Yin and Wang, 2021a, b; Xu and Zhang, 2022c, 2024c; Wenjing and Gang, 2021). It is important to remember that the base-period price index, which is based on the average weekly price for June 1994, has a value of 100. Each weekly observation of the index corresponds to the price of fifty kilos of green beans. Notably, the data contain a number of intermittently missing observations, which we estimate using cubic spline interpolation; the interpolated values for these weeks are …30, 123.74, 122.94, 145.63, 147.09, 167.63, 160.97, 181.51, 148.20, 157.83, 168.08, 158.53, 166.54 and 149.75, respectively. The plots in Figure 1 and the summary statistics in Table 1 take these estimated price observations into account. The price index of green beans, with the missing data estimated, is used in the following analysis.
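The missing-week treatment above can be sketched as follows. The paper does not specify which spline variant or boundary conditions it used, so this pure-Python natural cubic spline (zero second derivative at both ends) and the toy weekly series are illustrative assumptions only.

```python
from bisect import bisect_right

def natural_cubic_spline(xs, ys):
    """Interpolating function through (xs, ys) via a natural cubic spline."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for the interior second derivatives M_1..M_{n-1};
    # natural boundary conditions fix M_0 = M_n = 0.
    a = [0.0] * (n + 1); b = [1.0] * (n + 1)
    c = [0.0] * (n + 1); d = [0.0] * (n + 1)
    for i in range(1, n):
        a[i], b[i], c[i] = h[i - 1], 2.0 * (h[i - 1] + h[i]), h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n + 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * (n + 1)
    m[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]

    def s(x):
        # Locate the interval containing x and evaluate its cubic piece.
        i = min(max(bisect_right(xs, x) - 1, 0), n - 1)
        hi = h[i]
        return (m[i] * (xs[i + 1] - x) ** 3 + m[i + 1] * (x - xs[i]) ** 3) / (6 * hi) \
            + (ys[i] / hi - m[i] * hi / 6) * (xs[i + 1] - x) \
            + (ys[i + 1] / hi - m[i + 1] * hi / 6) * (x - xs[i])
    return s

# Hypothetical weekly index with weeks 3 and 4 missing: fill them from
# the observed weeks only (values are made up, not the paper's data).
weeks = [1, 2, 5, 6, 7]
index = [120.5, 123.7, 145.6, 147.1, 150.0]
spline = natural_cubic_spline(weeks, index)
filled = [spline(w) for w in (3, 4)]
```

In practice one would use a library routine (e.g. SciPy's `CubicSpline`); the hand-rolled version is shown only to make the tridiagonal construction explicit.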

Method
Our method of price prediction uses a nonlinear auto-regressive neural network, which may be depicted as y_t = f(y_{t-1}, ..., y_{t-d}). The variables y, t, d and f represent, in that order, the value of the green bean price index to be predicted, time (in weeks), the number of delays (in weeks) and the model's functional form. Neural networks have been explicitly shown in the literature to offer a great deal of potential for forecasting noisy and chaotic time series data in many scenarios (Karasu et al., 2017b), including the fields of finance and economics (Kumar et al., 2021; Yang et al., 2008, 2010; Wang and Yang, 2010; Karasu et al., 2020; Wegener et al., 2016). Their ability to learn independently (Karasu et al., 2020) and find nonlinear patterns (Altan et al., 2021) across various time series (Abraham et al., 2020; Zhan and Xiao, 2021) should be helpful in this sense. This modeling method, in particular, may be used to predict complicated time series data by combining a variety of fundamental nonlinear functions to mimic advanced nonlinear properties (Yang et al., 2008, 2010; Wang and Yang, 2010). Our attention is on the forecast for the upcoming week. We use a feedforward network design consisting of two layers: the hidden layer makes use of sigmoid transfer functions, while the output layer uses a linear one. We assess both the Levenberg-Marquardt (LM) (Levenberg, 1944; Marquardt, 1963) and scaled conjugate gradient (SCG) (Møller, 1993) training algorithms; comparisons between these two approaches appear in some of the previous research (Xu and Zhang, 2022g, 2023c, u; Baghirli, 2015; Al Bataineh and Kaur, 2018).
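The two-layer NAR architecture just described, y_t = f(y_{t-1}, ..., y_{t-d}) with a sigmoid hidden layer and a linear output, can be sketched in code. The paper trains its network with LM or SCG; the plain stochastic-gradient trainer, the tiny synthetic series and the hyperparameters below are illustrative simplifications, not the paper's setup.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class NARNet:
    """y_t = f(y_{t-1}, ..., y_{t-d}): sigmoid hidden layer, linear output."""

    def __init__(self, d, hidden, seed=0):
        rng = random.Random(seed)
        self.d, self.h = d, hidden
        self.W1 = [[rng.uniform(-0.5, 0.5) for _ in range(d)] for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
        self.b2 = 0.0

    def predict(self, lags):
        z = [sigmoid(sum(w * x for w, x in zip(row, lags)) + b)
             for row, b in zip(self.W1, self.b1)]
        return sum(w * zi for w, zi in zip(self.w2, z)) + self.b2, z

    def train(self, series, epochs=300, lr=0.05):
        # Build (lag vector, target) pairs from the series.
        pairs = [(series[t - self.d:t], series[t])
                 for t in range(self.d, len(series))]
        for _ in range(epochs):
            for lags, y in pairs:
                yhat, z = self.predict(lags)
                err = yhat - y
                # Backpropagate the squared-error gradient.
                for j in range(self.h):
                    gz = err * self.w2[j] * z[j] * (1.0 - z[j])
                    self.w2[j] -= lr * err * z[j]
                    for i in range(self.d):
                        self.W1[j][i] -= lr * gz * lags[i]
                    self.b1[j] -= lr * gz
                self.b2 -= lr * err

    def mse(self, series):
        pairs = [(series[t - self.d:t], series[t])
                 for t in range(self.d, len(series))]
        return sum((self.predict(l)[0] - y) ** 2 for l, y in pairs) / len(pairs)

# Toy nonlinear AR series standing in for a (normalized) price index.
rng = random.Random(1)
y = [0.3, 0.5]
for _ in range(200):
    y.append(0.4 * y[-1] + 0.3 * math.sin(3 * y[-2]) + 0.3 + 0.01 * rng.gauss(0, 1))

net = NARNet(d=2, hidden=3)
before = net.mse(y)
net.train(y)
after = net.mse(y)   # training should reduce the in-sample error
```

The point of the sketch is the structure: lagged values enter a small bank of sigmoid units whose outputs are linearly combined, which is exactly the "combination of basic nonlinear functions" the text refers to.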

LM algorithm
The LM approach approximates second-order training speed without having to compute the expensive Hessian matrix H (Paluszek and Thomas, 2020; Xu and Zhang, 2023o). The Hessian is approximated as H = J^T J, where J is the Jacobian matrix containing the first derivatives of the network errors with respect to the weights and biases, and the sum of squared errors is represented by a nonlinear function E(·) of the weights. The gradient is computed as g = J^T e, where e is a vector of network errors. The rule w_{k+1} = w_k − [J^T J + μI]^{−1} J^T e is used to update the weights and biases, where μ stands for the combination coefficient and I stands for the identity matrix. For μ = 0, the procedure reduces to Newton's method; when μ is large, the procedure becomes gradient descent with small step sizes. μ is decreased after successful steps, as the need for a quicker gradient descent diminishes. While keeping many of the benefits of steepest-descent algorithms and Gauss-Newton approaches, the LM algorithm also avoids many of their shortcomings. In particular, it can successfully tackle the issue of slow convergence (Hagan and Menhaj, 1994; Xu and Zhang, 2022a).
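The update rule above can be made concrete on a toy two-parameter curve fit (not the paper's network training, which its neural network software handles internally). The model y = a·exp(b·x), the damping schedule (decrease factor 0.1, increase factor 10, matching the settings reported later in this paper) and the synthetic data are illustrative assumptions.

```python
import math

def lm_fit(xs, ys, a=1.0, b=0.0, mu=1e-3, iters=200):
    """Levenberg-Marquardt fit of y = a * exp(b * x).

    Update: w_{k+1} = w_k - [J^T J + mu I]^{-1} J^T e, with mu decreased
    after successful steps and increased after rejected ones.
    """
    def sse(a_, b_):
        return sum((a_ * math.exp(b_ * x) - y) ** 2 for x, y in zip(xs, ys))

    for _ in range(iters):
        # Residuals e and Jacobian J (rows: d e_i / d[a, b]).
        e = [a * math.exp(b * x) - y for x, y in zip(xs, ys)]
        J = [[math.exp(b * x), a * x * math.exp(b * x)] for x in xs]
        # Form the 2x2 system [J^T J + mu I] delta = J^T e explicitly.
        g00 = sum(r[0] * r[0] for r in J) + mu
        g01 = sum(r[0] * r[1] for r in J)
        g11 = sum(r[1] * r[1] for r in J) + mu
        h0 = sum(r[0] * ei for r, ei in zip(J, e))
        h1 = sum(r[1] * ei for r, ei in zip(J, e))
        det = g00 * g11 - g01 * g01
        da = (g11 * h0 - g01 * h1) / det
        db = (g00 * h1 - g01 * h0) / det
        if sse(a - da, b - db) < sse(a, b):
            a, b, mu = a - da, b - db, mu * 0.1   # accept: relax damping
        else:
            mu *= 10.0                            # reject: more gradient-like
    return a, b

# Recover a = 2, b = 0.5 from noiseless synthetic data.
xs = [i * 0.2 for i in range(20)]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
a_hat, b_hat = lm_fit(xs, ys)
```

The accept/reject branch is the essence of LM: near a rejection μ grows and the step shrinks toward gradient descent, while after successes μ shrinks and the step approaches the Gauss-Newton step.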

SCG algorithm
Backpropagation methods change the weights along the steepest-descent direction, which does not necessarily produce the fastest convergence, even though the performance function decreases most rapidly in that direction. By searching along conjugate directions, conjugate gradient algorithms frequently produce faster convergence than steepest descent.
Most algorithms use the learning rate to determine the step size of the weight update. Conjugate gradient algorithms instead adjust the step size at each iteration: a search is carried out along the conjugate gradient direction to ascertain the step size that reduces the performance function. Additionally, one may utilize the SCG approach, which is completely automated and supervised, instead of the laborious line searches associated with conjugate gradient methods. This methodology can be faster than LM backpropagation (Xu and Zhang, 2022d, 2023r). In particular, it was shown that on a basic multilayer perceptron structure with two hidden layers, the SCG technique performed better than the LM algorithm in terms of average training iterations (Batra, 2014). The SCG technique computes the second-order term s_k as s_k = (E′(w_k + σ_k p_k) − E′(w_k))/σ_k + λ_k p_k, where E′(·) signifies the gradient, w is the weight vector in real Euclidean space, p is the nonzero weight vector of the conjugate system in real Euclidean space, k is the iteration index, and λ_k (also known as the Lagrange multiplier) and σ_k are two scaling factors. The step size is α_k = μ_k/δ_k, where μ_k = p_k^T r_k (with r_k = −E′(w_k)) and δ_k = p_k^T s_k; this choice minimizes E_qw(·), the quadratic approximation of E(·), along the search direction p_k.
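As a sanity check on the formulas above, the finite-difference term s_k can be computed for a simple quadratic error E(w) = 0.5·w^T A w − b^T w, where it should reproduce A·p_k (with λ_k = 0), and α_k should be the exact minimizer along p_k. The quadratic E and its constants are illustrative, not from the paper.

```python
def grad(w):
    """Gradient E'(w) of E(w) = 0.5 w^T A w - b^T w (2-D toy example)."""
    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]
    return [A[i][0] * w[0] + A[i][1] * w[1] - b[i] for i in range(2)]

def scg_second_order(w, p, sigma=1e-4, lam=0.0):
    """s_k = (E'(w + sigma p) - E'(w)) / sigma + lambda p."""
    g0 = grad(w)
    g1 = grad([wi + sigma * pi for wi, pi in zip(w, p)])
    return [(a - b0) / sigma + lam * pi for a, b0, pi in zip(g1, g0, p)]

def scg_step_size(w, p, s):
    """alpha_k = (p^T r) / (p^T s), with r = -E'(w)."""
    r = [-gi for gi in grad(w)]
    mu = sum(pi * ri for pi, ri in zip(p, r))
    delta = sum(pi * si for pi, si in zip(p, s))
    return mu / delta

w = [0.0, 0.0]
p = [1.0, 0.5]                  # some nonzero search direction
s = scg_second_order(w, p)      # approximately A @ p = [4.5, 2.5]
alpha = scg_step_size(w, p, s)  # exact line minimizer for a quadratic E
```

For a quadratic error the finite difference is exact, which is why SCG can avoid explicit line searches; λ_k only enters when the curvature estimate δ_k turns out to be non-positive.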
In addition to the two estimation approaches, we investigate alternative model setups regarding the number of hidden neurons and delays, as well as data division ratios. We consider, in particular, 2, 3, 5 and 10 for the number of hidden neurons; 2, 3, 4, 5 and 6 for the number of delays; and 70%-15%-15%, 60%-20%-20% and 80%-10%-10% for the ratio employed to divide the price data into training, validation and testing stages. Every model scenario examined is listed in Table 2. Setting #49 has been selected for green bean prices. It is based on the LM method and a training-validation-testing splitting ratio of 60%-20%-20%, with six delays and two hidden neurons. Completion of model training is decided by the number of validation checks and the magnitude of the gradient. The gradient becomes smaller as training approaches a performance minimum; upon reaching a gradient magnitude below 10^{-5}, training ceases. The number of validation checks counts the consecutive iterations in which the validation performance fails to decrease; after six such checks, training ends. Furthermore, training terminates upon reaching the designated 1,000 training epochs (training iterations). For the LM technique, the initial μ (combination coefficient) is 0.001, the decrease factor is 0.1, the increase factor is 10, and μ's maximum value is fixed at 10^{10}. For the SCG technique, the weight change determinant linked to second-derivative approximations is 5 × 10^{-5}, and the parameter used to regulate the Hessian's indefiniteness is 5 × 10^{-7}.
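The grid above implies 2 × 4 × 5 × 3 = 120 candidate settings; Table 2 numbers LM configurations 1 + 2v and SCG configurations 2 + 2v for v = 0, ..., 59. The quick enumeration below reproduces the count and that odd/even numbering; the ordering of the remaining factors is an assumption, since Table 2's exact row order is not reproduced here.

```python
from itertools import product

hidden_neurons = [2, 3, 5, 10]
delays = [2, 3, 4, 5, 6]
splits = ["70-15-15", "60-20-20", "80-10-10"]

# Alternate LM/SCG so that, as in Table 2, odd-numbered settings use LM
# (1 + 2v) and even-numbered settings use SCG (2 + 2v).
configs = []
for v, (split, neurons, delay) in enumerate(product(splits, hidden_neurons, delays)):
    configs.append({"id": 1 + 2 * v, "alg": "LM", "split": split,
                    "neurons": neurons, "delays": delay})
    configs.append({"id": 2 + 2 * v, "alg": "SCG", "split": split,
                    "neurons": neurons, "delays": delay})

assert len(configs) == 120   # 2 algorithms x 4 x 5 x 3 grid points
```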

Result
We assess each model setup in Table 2 for the green bean price index. Relative root mean square error (RRMSE) is the performance statistic that we calculate for each configuration at the training, validation and testing stages. The RRMSE makes it possible to compare prediction results across models or objectives (Jamieson et al., 1991; Heinemann et al., 2012; Li et al., 2013; Despotovic et al., 2016).

Table 2. Model configurations. Algorithms: Levenberg-Marquardt (LM) algorithm, settings 1 + 2v (v = 0, 1, ..., 59); scaled conjugate gradient (SCG) algorithm, settings 2 + 2v (v = 0, 1, ..., 59).

A numerical representation of the RRMSE results is shown in Figure 2. We ultimately decide on configuration #49 (six delays and two hidden neurons), which applies the LM algorithm and adopts a 60%-20%-20% ratio for data segmentation across training, validation and testing; of the three segmentation ratios considered, this one holds the most data for the testing part. The configuration is chosen by balancing accuracy and stability. The selected model configuration is indicated by the red arrow in Figure 2, and we can observe that, in that specific arrangement, the triangle for the testing part, the square for the validation part and the diamond for the training part are all rather close to one another. Other configurations, which produce a lower RRMSE for one particular sub-sample but higher RRMSEs for the remaining sub-samples, reveal less stability than the one selected. For example, during training, setting #41's RRMSE is somewhat lower than setting #49's, but it increases throughout validation and testing. Put differently, choosing setting #49 results in improved prediction stability as compared to setting #41. Our goal is to avoid under- or overfitting by selecting a model configuration for the green bean price index that performs relatively consistently throughout the training, validation and testing stages. After deciding on a configuration, we adjust one parameter at a time to investigate performance sensitivity to alternative configurations. Previous research has offered the following standards for evaluating the forecast accuracy of a model (Jamieson et al., 1991; Heinemann et al., 2012; Li et al., 2013; Despotovic et al., 2016): when one reaches RRMSE < 10%, forecasting performance is rated excellent; when
one reaches 10% < RRMSE < 20%, forecasting performance is rated good; when one reaches 20% < RRMSE < 30%, it is rated fair; and when RRMSE > 30%, it is rated poor. Using these rating criteria, the model developed here has excellent forecast accuracy, with an RRMSE of 3.98% during the testing phase. As seen in Figure 3, the selected configuration results in rather consistent performance across the testing, validation and training parts. Figure 3 also illustrates how configuration #49, which employs the LM algorithm, compares with configuration #50, which utilizes the SCG. It is clear from this comparison that the LM approach frequently yields more accuracy than the SCG. This finding is consistent with the research (Batra, 2014; Xu and Zhang, 2022j), which indicates that even though the SCG technique is probably faster, the LM approach is likely to be more accurate on a multi-layer perceptron structure with two hidden layers. In our particular case, the LM algorithm takes 3.040497 s to execute and the SCG algorithm takes 0.655436 s. The results generally hold up well to modifications in data splitting ratios, as seen by the lack of substantial differences in overall performance across configurations #49, #9 and #89.
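The RRMSE and the rating bands above can be sketched as follows. Normalizing the RMSE by the mean of the observed series is a common definition of the relative RMSE, but the paper does not spell out its exact normalizer, so treat that choice as an assumption.

```python
import math

def rrmse(observed, predicted):
    """Relative RMSE in percent: RMSE normalized by the mean observation."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)

def rate(r):
    """Rating bands for forecast accuracy cited in the text."""
    if r < 10.0:
        return "excellent"
    if r < 20.0:
        return "good"
    if r < 30.0:
        return "fair"
    return "poor"

# Made-up index values: RMSE = 4, mean = 100, so RRMSE = 4.0%.
obs = [100.0, 100.0, 100.0, 100.0]
pred = [104.0, 96.0, 104.0, 96.0]
r = rrmse(obs, pred)
band = rate(r)   # "excellent", the same band as the paper's 3.98% testing RRMSE
```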
Based on the selected model configuration, we offer a thorough visual depiction of the predicted values and prediction errors for the green bean price index in Figure 4 during the training, validation and testing phases. In summary, the selected configuration appears to produce dependable and precise outcomes, suggesting the neural network's potential as a forecasting instrument for the weekly price index under examination. More specifically, the projected prices in the top panel of Figure 4 closely mirror observed ones during the training, validation and testing phases. The center and bottom panels of Figure 4 show that there is no issue of consistently under- or overpredicting during the three phases. They also show that some of the forecast errors are relatively larger during certain short subperiods with dramatically greater price volatility. This finding is not too surprising, and the model still typically captures the price changes throughout these subperiods properly. Furthermore, error auto-correlation analysis was performed (details available upon request). With the exception of the lag of 10, for which there is a minor violation of the confidence bounds, the results demonstrate that, up to a lag of 20, all auto-correlations associated with various delays stay within the 95% confidence limits. This small violation would not have occurred had the 99% confidence limits been applied. The error auto-correlation results generally indicate that the selected configuration is appropriate. The existence of nonlinearities at higher moments in financial and economic time-series data has been widely confirmed in the literature (Yang et al., 2008, 2010; Xu and Zhang, 2023l; Xu and Zhang, 2022e; Wang and Yang, 2010; Karasu et al., 2020). In this context, we execute the BDS test (Brock et al., 1996; Fujihara and Mougoué, 1997; Dergiades et al., 2013) on the weekly price index of green
beans, using a range of embedding dimensions and distances to measure the proximity of data points. We discover that all test p-values are almost 0. According to these findings, the pricing data have nonlinear properties. This circumstance is advantageous for neural networks, since they may utilize historical data to forecast (Karasu et al., 2020) and detect nonlinearities (Altan et al., 2021) in commodity prices. Neural networks may simulate a wide variety of functions more accurately than some other approaches that account for nonlinearities via a single nonlinear function, because they rely on a class of multi-layer networks that integrate numerous fundamental nonlinear functions (Yang et al., 2008, 2010; Wang and Yang, 2010). We show how to utilize a neural network to estimate the weekly price index of green beans, and we are able to achieve very good prediction stability and accuracy.

Benchmark analysis
Thus far, we have concentrated on neural networks in our work. The no-change model, autoregressive model (AR), regression tree model (RT), SVR model and AR-generalized autoregressive conditional heteroskedasticity model (AR-GARCH) are the five benchmark models examined in this analysis. In addition to the RRMSE, the forecast mean squared errors (MSEs) of two models are compared using the modified Diebold-Mariano (MDM) test (Diebold and Mariano, 2002; Harvey et al., 1997) to assess the prediction accuracy of these different models. The basis for the MDM test is the loss differential d_t = (error^{M1}_t)^2 − (error^{M2}_t)^2, where error^{M1}_t and error^{M2}_t are the two errors corresponding to time t that are generated by models M1 and M2, respectively. In this case, M2 denotes the neural network model, and M1 is one of the five benchmark models being assessed. Specifically, the test statistic is expressed as MDM = [(T + 1 − 2h + h(h − 1)/T)/T]^{1/2} × d̄/[V̂(d̄)]^{1/2}, where T represents the number of periods over which prediction performance comparisons are performed; h is the forecast horizon (in this case, h = 1); d̄ is the sample average of d_t; and V̂(d̄) = T^{−1}[γ̂_0 + 2Σ_{k=1}^{h−1} γ̂_k], with γ̂_k denoting d_t's kth auto-covariance, for k = 1, ..., h − 1 and h ≥ 2. The null hypothesis of the MDM test is the equivalence of expected prediction performance from the two models. Under the null hypothesis, the MDM statistic follows the t-distribution with T − 1 degrees of freedom.
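For h = 1 the statistic reduces to the Diebold-Mariano statistic scaled by the Harvey et al. (1997) factor [(T − 1)/T]^{1/2}. A minimal pure-Python sketch follows; the error vectors are made up for illustration, whereas in the paper they would come from the testing-phase forecasts.

```python
import math

def mdm_statistic(errors_m1, errors_m2):
    """Modified Diebold-Mariano statistic for one-step forecasts (h = 1).

    Positive values indicate that model M2 has the smaller squared errors.
    Compare against a t-distribution with T - 1 degrees of freedom.
    """
    T = len(errors_m1)
    d = [e1 ** 2 - e2 ** 2 for e1, e2 in zip(errors_m1, errors_m2)]
    dbar = sum(d) / T
    gamma0 = sum((di - dbar) ** 2 for di in d) / T   # lag-0 autocovariance
    var_dbar = gamma0 / T                            # V(dbar) when h = 1
    dm = dbar / math.sqrt(var_dbar)
    correction = math.sqrt((T + 1 - 2 * 1) / T)      # Harvey et al., h = 1
    return correction * dm

# Made-up errors in which M2 is uniformly more accurate than M1.
e1 = [2.0, 1.0, 2.0, 1.0]
e2 = [1.0, 0.0, 1.0, 0.0]
stat = mdm_statistic(e1, e2)   # positive: evidence in favor of M2
```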
The following details apply to the five benchmark models. In the no-change model, the forecast for the following period equals the current period's observation. The AR and AR-GARCH models use the same number of lags as the neural network model's number of delays. The GARCH component of the AR-GARCH model has the form GARCH(1,1). The SVR and RT models also use the same predictors as the neural network model. More specifically, the RT model employs the classification and regression tree (CART) approach (Breiman, 2017), and the SVR analysis is conducted using the linear ε-insensitive SVR model.
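The simplest of these baselines, the no-change (random walk) model, can be written down directly; the toy index values are illustrative.

```python
def no_change_forecasts(series):
    """One-step-ahead no-change forecasts: next week's forecast is this week's value."""
    return series[:-1]   # forecast for t+1 is the observation at t

series = [100.0, 102.0, 101.0, 105.0, 104.0]
preds = no_change_forecasts(series)
targets = series[1:]
errors = [t - p for t, p in zip(targets, preds)]   # feed these into RRMSE/MDM
```

Despite its simplicity, this baseline is a standard hurdle in the forecasting literature: a model that cannot beat it adds no predictive value over the latest observation.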
Figure 5 shows the outcomes of the benchmark analysis conducted during the out-of-sample testing phase using the RRMSE. The neural network model evidently has superior accuracy, as it produces the lowest RRMSE among the five benchmark models. In the MDM test, the neural network model is compared to each of the five benchmark models, and each p-value is less than 0.01; this suggests that the neural network model generates prediction performance that is statistically significantly better than that of all five benchmark models analyzed.

Policy implication
Historically, both market participants and regulatory bodies have placed a great deal of weight on forecasts of agricultural commodity prices. Estimating agricultural commodities' values can be crucial to reducing uncertainty and developing well-informed policies, as many commodities, including the green grams under consideration, have strategic relevance for many nations and regions. In agricultural commodity markets, almost all market participants need access to price forecast data in order to make well-informed judgments. Forecast information, for example, provides useful insight into the pricing tactics commodity processors will use for future sales. Trading partners can use forecast findings to obtain the information necessary to determine the conditions of their contracts. Trading and investing professionals may be able to profit from the spot or futures market by using predictive data. Risk managers and policymakers contend that forecast data is essential for developing strategies for efficient risk management and policy formulation.
To the best of the authors' knowledge, most forecasting methods employed by the government and many market participants for agricultural commodity price indices at the wholesale level are based on econometric models, especially time-series econometric models. In the interim, professional judgments are still used. This has a practical basis, since econometric models and expert views are often easy to develop, use and maintain; many forecast customers have relied on them for decades, and many of these models can provide predictions with a decent degree of accuracy. It is widely acknowledged that machine learning models have potential and that further research into them is worthwhile, especially in light of the rising affordability of computing resources and the plausible presence of nonlinear patterns in the price series of a diverse variety of agricultural commodities. However, some decision-makers may still view these models as unduly complex forecasting tools, which could make it difficult for some market participants and policymakers to consider them. In fact, in recent years, the capabilities of artificial intelligence and machine learning have attracted the attention of many governments as well as astute market actors. This study follows the current trend of examining neural networks' potential to resolve forecasting problems, here for the wholesale green gram price index. This article explains the process of creating such a model, with excellent prediction accuracy and stability as the result. These findings imply that machine learning models are worth investigating, probably for a greater variety of agricultural products and other economic domains.

Conclusion
Projections of commodity prices are crucial to a wide spectrum of market players in the resource and agriculture sectors. This matter should be no exception given the important roles that green beans play in the Chinese market. In the current study, we undertake a forecast exercise for the Chinese market using the weekly wholesale price index of green beans over a ten-year period, from January 1, 2010 to January 3, 2020, a series that is complex and has nonlinear elements. Since the nonlinear auto-regressive neural network has a great deal of potential for modeling different nonlinear patterns, we investigate it as the forecast approach for this purpose. We analyze the forecast performance across different settings of model estimation procedures, hidden neuron and delay counts, and data splitting ratios. Our inquiry led to a very simple model that delivers forecasts with outstanding stability and accuracy. The model is based on six delays and two hidden neurons, trained with the Levenberg-Marquardt algorithm (Levenberg, 1944; Marquardt, 1963) under a training-validation-testing ratio of 60%-20%-20%. With this selected model, we obtain relative root mean square errors of 4.34%, 4.71% and 3.98% for the training, validation and testing stages, respectively. According to our benchmark research, the neural network model produces statistically significantly better accuracy than other machine learning approaches and numerous conventional time-series econometric models. We thus provide empirical evidence that neural networks are a useful tool for weekly green bean price predictions in China. It might be argued that our results suffice to serve as independent technical price forecasts. Alternatively, they may be combined with other (basic) forecast results to form views on pricing trends and carry out policy research. We use a simple, understandable framework that should be straightforward to implement. Many market participants could find this important (Brandt and Bessler, 1983), and the framework might be expanded to handle forecasting tasks involving different commodities in other economic sectors, including the metal, energy and mining industries. Future research on the potential applications of graph theory to (non)linear time-series models might be intriguing (Kano and Shimizu, 2003; Shimizu et al., 2006, 2011; Xu and Zhang, 2023f; Shimizu and Kano, 2008; Xu, 2014a; Bessler and Wang, 2012). Another worthwhile area of research might be the economic effects of estimating commodity prices using neural networks or other machine learning technology (Yang et al., 2008, 2010; Wang and Yang, 2010).
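The selected specification (six delayed inputs, two hidden neurons, a 60%-20%-20% chronological split and Levenberg-Marquardt estimation) can be sketched in a few lines. The code below is a minimal illustration on synthetic data rather than the authors' implementation: SciPy's `least_squares(method='lm')` stands in for the Levenberg-Marquardt trainer, the random-walk series is a stand-in for the actual price index, and all names are our own.

```python
import numpy as np
from scipy.optimize import least_squares

def make_lagged(series, delays=6):
    # Input matrix of the last `delays` observations and the one-step-ahead target
    X = np.column_stack([series[i:len(series) - delays + i] for i in range(delays)])
    return X, series[delays:]

def nar_predict(params, X, delays=6, hidden=2):
    # Unpack weights of a 6-input, 2-hidden-neuron tanh network with linear output
    W1 = params[:delays * hidden].reshape(hidden, delays)
    b1 = params[delays * hidden:delays * hidden + hidden]
    W2 = params[delays * hidden + hidden:delays * hidden + 2 * hidden]
    b2 = params[-1]
    return np.tanh(X @ W1.T + b1) @ W2 + b2

def rrmse(y_true, y_pred):
    # Relative root mean square error, in percent
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.mean(y_true) * 100

rng = np.random.default_rng(0)
series = 100 + np.cumsum(rng.normal(size=520))  # synthetic stand-in for the weekly index
X, y = make_lagged(series)

# 60%-20%-20% chronological split into training, validation and testing
n = len(y)
tr, va = int(0.6 * n), int(0.8 * n)

# Levenberg-Marquardt fit of the 17 network parameters on the training portion
n_params = 6 * 2 + 2 + 2 + 1
res = least_squares(lambda p: nar_predict(p, X[:tr]) - y[:tr],
                    x0=rng.normal(scale=0.1, size=n_params), method='lm')

for name, sl in [('train', slice(0, tr)), ('val', slice(tr, va)), ('test', slice(va, n))]:
    print(name, round(rrmse(y[sl], nar_predict(res.x, X[sl])), 2))
```

In practice the validation slice would be monitored during training for early stopping; here it is only scored after the fit, which keeps the sketch short.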

Figure 1. Visualization of the Chinese market's weekly price index for green beans over a ten-year span, from January 1, 2010 to January 3, 2020

Figure 3 displays the relevant findings, together with the RRMSEs for training, validation and testing, for each parameter setting. Sensitivity to model training strategies is assessed by contrasting configurations #49 and #5. Sensitivity to the number of delays is examined by comparing configuration #49 with #41, #43, #45 and #47. Sensitivity to the number of hidden neurons is studied by comparing configuration #49 with #59, #69 and #79. Sensitivity to the ratio used to segment the price data is investigated by comparing configuration #49 with #9 and #89. Based on these comparisons, setting #49 is selected.
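The comparisons above amount to evaluating a grid of candidate configurations along four dimensions: data splitting ratio, delay count, hidden-neuron count and training strategy. A sketch of how such a grid could be enumerated follows; the candidate values are hypothetical assumptions for illustration, not the paper's exact grid.

```python
from itertools import product

# Candidate values are illustrative assumptions, not the paper's exact grid
splits = [(0.6, 0.2, 0.2), (0.7, 0.15, 0.15), (0.8, 0.1, 0.1)]  # train/val/test ratios
delays = [2, 4, 6, 8, 10]                                       # lagged inputs
hidden = [2, 5, 10, 20]                                         # hidden neurons
algos = ['levenberg-marquardt', 'bayesian-regularization']      # training strategies

configs = [{'split': s, 'delays': d, 'hidden': h, 'algo': a}
           for s, d, h, a in product(splits, delays, hidden, algos)]
print(len(configs))  # 3 * 5 * 4 * 2 = 120 candidate settings
```

Each configuration would then be trained and scored by its training, validation and testing RRMSEs, with the sensitivity checks reducing to row-wise comparisons over this grid.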

Figure 2. RRMSEs across model configurations for the weekly price index of wholesale green beans