Search results
1 – 10 of 203
Abstract
Purpose
The paper aims to extend the well-documented work of Joy Buolamwini and Ruha Benjamin by applying their critique to the African continent. The research assesses whether algorithmic biases are prevalent in DALL-E 2 and StarryAI, with the aim of informing better artificial intelligence (AI) systems for future use.
Design/methodology/approach
The paper utilised a desktop study for the literature and gathered data from OpenAI’s DALL-E 2 text-to-image generator and the StarryAI text-to-image generator.
Findings
DALL-E 2 significantly underperformed when tasked with generating images of “An African Family” as opposed to images of a “Family”: the former lacked any discernible detail compared with the latter. StarryAI significantly outperformed DALL-E 2 and rendered visible faces; however, the accuracy of the culture portrayed was poor.
Research limitations/implications
Because of the chosen research approach, the research results may lack generalisability. Therefore, researchers are encouraged to test the proposed propositions further. The implications, however, are that more inclusion is warranted to help address the issue of cultural inaccuracies noted in a few of the paper’s experiments.
Practical implications
The paper is useful to advocates of algorithmic equality and fairness, highlighting evidence of the implications of systemically induced algorithmic bias.
Social implications
The reduction in offensive racism and more socially appropriate AI can be a better product for commercialisation and general use. If AI is trained on diversity, it can lead to better applications in contemporary society.
Originality/value
The paper’s use of DALL-E 2 and StarryAI is an under-researched area, and future studies on this matter are welcome.
Mohammad Rahiminia, Jafar Razmi, Sareh Shahrabi Farahani and Ali Sabbaghnia
Abstract
Purpose
Supplier segmentation provides companies with suitable policies to control each segment, thereby saving time and resources. Sustainability has become a mandatory requirement in competitive business environments. This study aims to develop a clustering-based approach to sustainable supplier segmentation.
Design/methodology/approach
The characteristics of the suppliers and the aspects of the purchased items were considered simultaneously. The weights of the sub-criteria were determined using the best-worst method. Then, the K-means clustering algorithm was applied to all company suppliers based on four criteria. The proposed model is applied to a real case study to test the performance of the proposed approach.
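The pipeline described above (criterion weighting followed by K-means over all suppliers) can be sketched in pure Python. The four criteria, the weight values and the supplier scores below are hypothetical stand-ins, not the case-study data, and the best-worst-method weights are taken as given rather than derived:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means; points are lists of weighted criterion scores."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each supplier to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Recompute centroids; keep the old one if a cluster empties.
        new = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters

# Hypothetical supplier scores on four criteria, scaled by assumed
# best-worst-method weights (both invented for illustration).
weights = [0.4, 0.3, 0.2, 0.1]
suppliers = [[2, 8, 5, 1], [9, 1, 4, 7], [2, 7, 6, 2], [8, 2, 5, 6]]
scaled = [[w * x for w, x in zip(weights, s)] for s in suppliers]
centroids, clusters = kmeans(scaled, k=2)
```

Each resulting cluster can then be assigned its own relationship-management policy, which is the point of segmentation.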
Findings
The results prove that supplier segmentation is more efficient when using clustering algorithms, and the best criteria are selected for sustainable supplier segmentation and managing supplier relationships.
Originality/value
This study integrates sustainability considerations into the supplier segmentation problem using a hybrid approach. The proposed sustainable supplier segmentation is a practical tool that eliminates complexity and presents the possibility of convenient execution. The proposed method helps business owners to elevate their sustainable insights.
Pamela Danese, Riccardo Mocellin and Pietro Romano
Abstract
Purpose
The purpose of this paper is to contribute to the debate on blockchain (BC) adoption for preventing counterfeiting by investigating BC systems where different options for BC feeding and reading complement the use of BC technology. By grounding on the situational crime prevention, this study analyses how BC systems can be designed to effectively prevent counterfeiting.
Design/methodology/approach
This is a multiple-case study of five Italian wine companies using BC to prevent counterfeiting.
Findings
This study finds that the desired level of upstream/downstream counterfeiting protection that a brand owner intends to guarantee to customers through BC is the key driver to consider in the design of BC systems. The study identifies which variables are relevant to the design of feeding and reading processes and explains how such variables can be modulated in accordance with the desired level of counterfeiting protection.
Research limitations/implications
The cases investigated are Italian companies within the wine sector, and the BC projects analysed are in the pilot phase.
Practical implications
The study provides practical suggestions to address the design of BC systems by identifying a set of key variables and explaining how to properly modulate them to face upstream/downstream counterfeiting.
Originality/value
This research applies a new perspective based on the situational crime prevention approach in studying how companies can design BC systems to effectively prevent counterfeiting. It explains how feeding and reading process options can be configured in BC systems to assure different degrees of counterfeiting protection.
Michał Ciałkowski, Aleksander Olejnik, Magda Joachimiak, Krzysztof Grysa and Andrzej Frąckowiak
Abstract
Purpose
To reduce the heat load of a gas turbine blade, its surface is covered with an outer layer of ceramics with high thermal resistance. The purpose of this paper is the selection of ceramics with a heat conduction coefficient and thickness low enough that the permissible metal temperature, beyond which mechanical properties are lost, is not exceeded at the metal-ceramics interface.
Design/methodology/approach
Therefore, for given temperature changes over time on the metal-ceramics interface, temperature changes over time on the inner side of the blade and the assumed initial temperature, the temperature change over time on the outer surface of the ceramics should be determined. The problem presented in this way is a Cauchy-type problem. When analyzing the problem, it is taken into account that the thermophysical properties of metal and ceramics may depend on temperature. Because the ceramic layer is thin in relation to the wall thickness, the problem is considered in a flat layer. Thus, a one-dimensional non-stationary heat flow is considered.
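For context, the direct (well-posed) counterpart of this one-dimensional non-stationary problem can be marched explicitly in time. The sketch below assumes constant thermophysical properties (the paper allows temperature-dependent ones) and invented values for the diffusivity, grid and boundary temperatures:

```python
def ftcs_step(T, alpha, dx, dt):
    """One explicit (FTCS) time step of T_t = alpha * T_xx.

    Stable only for r = alpha*dt/dx**2 <= 0.5; the inverse Cauchy
    problem treated in the paper has no such simple stability bound.
    """
    r = alpha * dt / dx ** 2
    if r > 0.5:
        raise ValueError(f"unstable step: r = {r:.3f} > 0.5")
    return [T[0]] + [
        T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        for i in range(1, len(T) - 1)
    ] + [T[-1]]

# Illustrative wall: one face at 1300 K (hot gas side), the rest at 300 K.
T = [1300.0] + [300.0] * 9
T1 = ftcs_step(T, alpha=1e-5, dx=1e-3, dt=0.04)  # r = 0.4, stable
```

The ill-conditioning discussed below arises precisely because the inverse problem reverses this well-behaved forward march.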
Findings
The range of stability of the Cauchy problem as a function of time step, ceramics thickness and the thermophysical properties of metal and ceramics is examined. The numerical computations also involved the influence of disturbances in the temperature on the metal-ceramics interface on the solution to the inverse problem.
Practical implications
The computational model can be used to analyze the heat flow in gas turbine blades with a thermal barrier.
Originality/value
A number of inverse problems of the type considered in the paper are presented in the literature. Inverse problems, especially those Cauchy-type, are ill-conditioned numerically, which means that a small change in the inputs may result in significant errors of the solution. In such a case, regularization of the inverse problem is needed. However, the Cauchy problem presented in the paper does not require regularization.
Patrik Jonsson, Johan Öhlin, Hafez Shurrab, Johan Bystedt, Azam Sheikh Muhammad and Vilhelm Verendel
Abstract
Purpose
This study aims to explore and empirically test variables influencing material delivery schedule inaccuracies.
Design/methodology/approach
A mixed-method case approach is applied. Explanatory variables are identified from the literature and explored in a qualitative analysis at an automotive original equipment manufacturer. Using logistic regression and random forest classification models, quantitative data (historical schedule transactions and internal data) enables the testing of the predictive difference of variables under various planning horizons and inaccuracy levels.
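Of the two classifier families named above, the logistic-regression side can be sketched with plain batch gradient descent. The two features (a product-complexity score and the item's order life cycle in weeks) and all numbers are invented for illustration; they are not the OEM's schedule transactions, and the random-forest model is omitted:

```python
import math

def sigmoid(z):
    """Numerically safe logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Batch gradient descent; w[0] is the intercept."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi, start=1):
                grad[j] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    """1 = schedule line predicted inaccurate."""
    return 1 if w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)) > 0 else 0

# Hypothetical records: [product_complexity_score, order_life_cycle_weeks]
X = [[0.9, 2], [0.8, 3], [0.2, 40], [0.1, 35], [0.7, 4], [0.3, 30]]
y = [1, 1, 0, 0, 1, 0]   # 1 = delivery schedule turned out inaccurate
w = train_logreg(X, y)
```

In the study's setting, separate models of this kind would be fitted per planning horizon and inaccuracy level to compare the predictive difference of the variables.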
Findings
The effects on delivery schedule inaccuracies are contingent on a decoupling point, and a variable may have a combined amplifying (complexity generating) and stabilizing (complexity absorbing) moderating effect. Product complexity variables are significant regardless of the time horizon, and the item’s order life cycle is a significant variable with predictive differences that vary. Decoupling management is identified as a mechanism for generating complexity absorption capabilities contributing to delivery schedule accuracy.
Practical implications
The findings provide guidelines for exploring and finding patterns in specific variables to improve material delivery schedule inaccuracies and input into predictive forecasting models.
Originality/value
The findings contribute to explaining material delivery schedule variations, identifying potential root causes and moderators, empirically testing and validating effects, and conceptualizing features that cause and moderate inaccuracies in relation to the decoupling management and complexity theory literature.
Zaminor Zamzamir@Zamzamin, Razali Haron and Anwar Hasan Abdullah Othman
Abstract
Purpose
This study investigates the impact of derivatives as risk management strategy on the value of Malaysian firms. This study also examines the interaction effect between derivatives and managerial ownership on firm value.
Design/methodology/approach
The study examines 200 nonfinancial firms engaged in derivatives for the period 2012–2017 using the generalized method of moments (GMM) to establish the influence of derivatives and managerial ownership on firm value. The study refers to two related theories (hedging theory and managerial aversion theory) to explain its findings. Firm value is measured using Tobin's Q with return on assets (ROA) and return on equity (ROE) as robustness checks.
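The three firm-value measures named above have simple textbook forms. The sketch below uses a common approximation of Tobin's Q (market value of equity plus debt over total assets) with made-up figures, not data from the 200-firm sample:

```python
def tobins_q(market_value_equity, total_debt, total_assets):
    """Simple approximation: (market equity + debt) / total assets."""
    return (market_value_equity + total_debt) / total_assets

def roa(net_income, total_assets):
    """Return on assets."""
    return net_income / total_assets

def roe(net_income, shareholders_equity):
    """Return on equity."""
    return net_income / shareholders_equity

# Illustrative figures in millions; not from the study's sample.
q = tobins_q(800.0, 400.0, 1000.0)   # > 1 suggests value beyond book assets
```

The GMM estimation itself is a panel-econometrics step beyond this sketch; these ratios are only the dependent variables it would explain.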
Findings
The study found evidence of the positive influence of derivatives on firm value, as proposed by the hedging theory. However, based on the negative interaction between derivatives and managerial ownership on firm value, the study concludes that managers hedge less when they own more shares. Hedging decisions among managers in Malaysian firms therefore do not subscribe to the managerial aversion theory.
Research limitations/implications
This study focuses on the derivatives (foreign currency derivatives, interest rate derivatives and commodity derivatives) and managerial ownership that is deemed relevant and important to the Malaysian firms. Other forms of ownership such as state-/foreign owned and institutional ownership are not covered in this study.
Practical implications
This study has important implications for managers and investors: first, the importance of risk management using derivatives to increase firm value; second, the influence of derivatives and managerial ownership on firm value; and finally, the quality of reporting on derivatives exposure by firms in line with the required accounting standard.
Originality/value
There is limited empirical evidence on the impact of derivatives on firm value as well as the influence of managerial ownership on hedging decisions of Malaysian firms. This study analyzes the influence of derivatives on firm value during the period in which reporting on derivatives in financial reports is made mandatory by the Malaysian regulator, hence avoiding data inaccuracy unlike the previous studies on Malaysia. This study therefore fills the gap in the literature in relation to the risk management strategies using derivatives in Malaysia.
Oliver Hutt, Kate Bowers, Shane Johnson and Toby Davies
Abstract
Purpose
The purpose of this paper is to use an evaluation of a micro-place-based hot-spot policing implementation to highlight the potential issues raised by data quality standards in the recording and measurement of crime data and police officer movements.
Design/methodology/approach
The study focusses on an area of London (UK) which used a predictive algorithm to designate micro-place patrol zones for each police shift over a two-month period. Police officer movements are measured using GPS data from officer-worn radios. Descriptive statistics regarding the crime data commonly used to evaluate this type of implementation are presented, and simple analyses are presented to examine the effects of officer patrol duration (dosage) on crime in micro-place hot-spots.
Findings
The results suggest that patrols of 10-20 minutes in a given police shift have a significant impact on reducing crime; however, patrols of less than about 10 minutes and more than about 20 minutes are ineffective at deterring crime.
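The dosage finding above amounts to a three-band classification of patrol durations. A minimal sketch, with hypothetical GPS-derived durations and the 10- and 20-minute cut-points taken from the result above:

```python
def dosage_band(minutes):
    """Band patrol dosage per shift by the 10/20-minute cut-points."""
    if minutes < 10:
        return "under-dosed"
    if minutes <= 20:
        return "effective"
    return "over-dosed"

def band_counts(patrols):
    """Tally patrol visits per dosage band."""
    counts = {"under-dosed": 0, "effective": 0, "over-dosed": 0}
    for m in patrols:
        counts[dosage_band(m)] += 1
    return counts

# Hypothetical patrol durations (minutes) for one micro-place hot-spot.
patrols = [4, 12, 18, 25, 9, 15, 31, 11]
counts = band_counts(patrols)
```

Tallies of this kind, joined to recorded crime counts per hot-spot, underpin the dosage-effect comparison described above.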
Research limitations/implications
Due to the sparseness of officer GPS data, their paths have to be interpolated, which could introduce error into the estimated patrol dosages. Similarly, errors and uncertainty in recorded crime data could have a substantial impact on the designation of micro-place interventions and evaluations of their effectiveness.
Originality/value
This study is one of the first to use officer GPS data to estimate patrol dosage and places particular emphasis on the issue of data quality when evaluating micro-place interventions.
Abstract
Purpose
With the outset of automatic detection of information, misinformation, and disinformation, the purpose of this paper is to examine and discuss various conceptions of information, misinformation, and disinformation within philosophy of information.
Design/methodology/approach
The examinations are conducted within a Gricean framework in order to account for the communicative aspects of information, misinformation, and disinformation as well as the detection enterprise.
Findings
While there often is an exclusive focus on truth and falsity as that which distinguish information from misinformation and disinformation, this paper finds that the distinguishing features are actually intention/intentionality and non-misleadingness/misleadingness – with non-misleadingness/misleadingness as the primary feature. Further, the paper rehearses the argument in favor of a true variety of disinformation and extends this argument to include true misinformation.
Originality/value
The findings are novel and pose a challenge to the possibility of automatic detection of misinformation and disinformation. In particular, the notions of true disinformation and true misinformation, as varieties of disinformation and misinformation, force the true/false dichotomy for information versus mis-/disinformation to collapse.
Tadhg O’Mahony, Jyrki Luukkanen, Jarmo Vehmas and Jari Roy Lee Kaivo-oja
Abstract
Purpose
The literature on economic forecasting shows increasing criticism of the inaccuracy of forecasts, with major implications for economic and fiscal policymaking. Forecasts are subject to the systemic uncertainty of human systems and considerable event-driven uncertainty, and show biases towards optimistic growth paths. The purpose of this study is to consider approaches to improve economic foresight.
Design/methodology/approach
This study describes the practice of economic foresight as evolving in two separate, non-overlapping branches: short-term economic forecasting and long-term scenario analysis of development, the latter found in studies of climate change and sustainability. The unique case of Ireland is considered, a country that has experienced both steep growth and deep troughs, with uncertainty that has confounded forecasting. The challenges facing forecasts are discussed, with a brief review of the drivers of growth and of long-term economic scenarios in the global literature.
Findings
Economic forecasting seeks to manage uncertainty by improving the accuracy of quantitative point forecasts, and related models. Yet, systematic forecast failures remain, and the economy defies prediction, even in the near-term. In contrast, long-term scenario analysis eschews forecasts in favour of a set of plausible or possible alternative scenarios. Using alternative scenarios is a response to the irreducible uncertainty of complex systems, with sophisticated approaches employed to integrate qualitative and quantitative insights.
Research limitations/implications
To support economic and fiscal policymaking, it is necessary to advance approaches to economic foresight and improve the handling of uncertainty and related risk.
Practical implications
While European Union Regulation (EC) 1466/97 mandates the pursuit of improved accuracy in short-term economic forecasts, there is now a case for implementing advanced foresight approaches for improved analysis and more robust decision-making.
Social implications
Building economic resilience and adaptability, as part of a sustainable future, requires both long-term strategic planning, and short-term policy. A 21st century policymaking process can be better supported by analysis of alternative scenarios.
Originality/value
To the best of the authors’ knowledge, the article is original in considering the application of scenario foresight approaches in economic forecasting. The study has value in improving the baseline forecast methods that are fundamental to contemporary economics, and in bringing the field of economics into the heart of foresight.
Jingfeng Xie, Jun Huang, Lei Song, Jingcheng Fu and Xiaoqiang Lu
Abstract
Purpose
The typical approach to modeling the aerodynamics of an aircraft is to develop a complete database through testing or computational fluid dynamics (CFD). A database with reasonable resolution is huge and requires an unacceptable CFD effort during conceptual design. Therefore, this paper aims to reduce the computing effort required by establishing a general aerodynamic model that needs only a few parameters.
Design/methodology/approach
The model structure was a preconfigured polynomial model, and the parameters were estimated with a recursive method to further reduce the calculation effort. To uniformly disperse the sample points through each step, a unique recursive sampling method based on a Voronoi diagram was presented. In addition, a multivariate orthogonal function approach was used.
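The paper's Voronoi-based recursive sampling is not reproduced here, but its intent of dispersing each new sample point into the emptiest region can be approximated by greedy farthest-point sampling over a candidate grid, as in this sketch (the (alpha, Mach) grid and seed point are invented):

```python
def farthest_point_samples(candidates, seeds, n_new):
    """Greedily add the candidate farthest from all current samples.

    A cheap stand-in for Voronoi-based spreading: each pick lands in
    the largest empty region, much like targeting the biggest cell.
    """
    samples = list(seeds)
    for _ in range(n_new):
        best, best_d = None, -1.0
        for c in candidates:
            # Squared distance to the nearest existing sample.
            d = min(sum((a - b) ** 2 for a, b in zip(c, s)) for s in samples)
            if d > best_d:
                best, best_d = c, d
        samples.append(best)
    return samples

# Candidate flight conditions on a coarse (alpha, Mach) grid, scaled 0..1.
grid = [(i / 4, j / 4) for i in range(5) for j in range(5)]
pts = farthest_point_samples(grid, seeds=[(0.0, 0.0)], n_new=3)
```

At each new sample point the aerodynamic coefficients would be evaluated by CFD and the polynomial model refitted, stopping once the fit error falls below tolerance.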
Findings
A case study of a flying-wing aircraft demonstrated that generating a model with acceptable precision (0.01 absolute error or 5% relative error) costs only 1/54 of the effort of creating a database. A series of six-degrees-of-freedom flight simulations shows that the model’s predictions were accurate.
Originality/value
This method proposed a new way to simplify the model and recursive sampling. It is a low-cost way of obtaining high-fidelity models during primary design, allowing for more precise flight dynamics analysis.