Search results
1 – 10 of 786
Abstract
Purpose
Algorithmic and computational thinking are necessary skills for designers in an increasingly digital world. Parametric design, a method to construct designs based on algorithmic logic and rules, has become widely used in architecture practice and has been incorporated in the curricula of architecture schools. However, few studies propose strategies for teaching parametric design to architecture students that tackle software literacy while promoting the development of algorithmic thinking.
Design/methodology/approach
A descriptive study and a prescriptive study are conducted. The descriptive study reviews the literature on parametric design education. The prescriptive study is centered on proposing the incomplete recipe as instructional material and a new approach to teaching parametric design.
Findings
The literature on parametric design education has mostly focused on curricular discussions, descriptions of case studies or studio-long approaches; day-to-day instructional methods, however, are rarely discussed. A pedagogical strategy to teach parametric design is introduced: the incomplete recipe. The instructional method proposed provides students with incomplete recipes for parametric scripts that are increasingly pared down as the students become expert users.
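To make the "incomplete recipe" concrete, here is a minimal, hypothetical Python sketch of the kind of parametric script such a recipe might scaffold: panel openings driven by an attractor point. All names and parameters are illustrative, not drawn from the article; in an actual incomplete recipe, individual steps would be blanked out for students to complete as their expertise grows.

```python
import math

def attractor_panels(rows, cols, attractor, max_open=0.9):
    """Parametric 'recipe': panel opening sizes driven by an attractor point.

    In an incomplete recipe, numbered steps like the ones below would be
    progressively blanked out for students to fill in.
    """
    ax, ay = attractor
    # Step 1: farthest possible distance, used to normalise.
    d_max = max(math.hypot(x - ax, y - ay)
                for x in range(cols) for y in range(rows))
    panels = []
    for y in range(rows):
        for x in range(cols):
            # Step 2: distance from this panel to the attractor.
            d = math.hypot(x - ax, y - ay)
            # Step 3: map distance to an opening ratio in [0, max_open].
            opening = max_open * (1.0 - d / d_max) if d_max else max_open
            panels.append(((x, y), round(opening, 3)))
    return panels

panels = attractor_panels(rows=3, cols=4, attractor=(0.0, 0.0))
```

The panel nearest the attractor gets the largest opening; the farthest gets none, which is the algorithmic rule a student would be asked to reconstruct.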
Originality/value
The article contributes to the existing literature by proposing the incomplete recipe as a strategy for teaching parametric design. The recipe as a pedagogical tool provides a means for both software skill acquisition and the development of algorithmic thinking.
G. Deepa, A.J. Niranjana and A.S. Balu
Abstract
Purpose
This study aims at proposing a hybrid model for early cost prediction of a construction project. Early cost prediction for a construction project is the basic approach to procuring a project within a predefined budget. However, most projects routinely face the impact of cost overruns. Furthermore, conventional and manual cost computing techniques are tedious, time-consuming and error-prone. To deal with such challenges, soft computing techniques such as artificial neural networks (ANNs), fuzzy logic and genetic algorithms are applied in construction management. Each technique has its own constraints, not only in terms of efficiency but also in terms of feasibility, practicability, reliability and environmental impacts. However, an appropriate combination of these techniques improves the model owing to their complementary strengths.
Design/methodology/approach
This paper proposes a hybrid model by combining machine learning (ML) techniques with ANN to accurately predict the cost of pile foundations. The parameters contributing toward the cost of pile foundations were collected from five different projects in India. Out of 180 collected data entries, 176 entries were finally used after data cleaning. About 70% of the final data were used for building the model and the remaining 30% were used for validation.
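As a minimal illustration of the workflow described, the sketch below performs the 70/30 split on 176 entries and reports accuracy as 100 × (1 − mean absolute percentage error). The data are synthetic and a closed-form linear fit stands in for the ML/ANN stages; none of this reproduces the authors' actual model.

```python
import random

def train_test_split(data, train_frac=0.7, seed=42):
    """70/30 split as described: 70% to build the model, 30% to validate."""
    rnd = random.Random(seed)
    shuffled = data[:]
    rnd.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def fit_linear(pairs):
    """Closed-form simple linear regression (stand-in for the ANN stage)."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    slope = sxy / sxx
    return slope, my - slope * mx

def mape_accuracy(model, pairs):
    """Accuracy reported as 100 * (1 - mean absolute percentage error)."""
    slope, intercept = model
    mape = sum(abs((intercept + slope * x) - y) / y for x, y in pairs) / len(pairs)
    return 100 * (1 - mape)

# Synthetic stand-in data: (pile depth, cost); 176 entries after cleaning.
data = [(d, 50 + 3.2 * d + random.Random(d).uniform(-2, 2))
        for d in range(10, 186)]
train, test = train_test_split(data)
model = fit_linear(train)
acc = mape_accuracy(model, test)
```

The split sizes (123 training, 53 validation) mirror the 70/30 protocol in the abstract; the accuracy figure depends entirely on the synthetic data here.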
Findings
The proposed model is capable of predicting the pile foundation costs with an accuracy of 97.42%.
Originality/value
Although various cost estimation techniques are available, appropriate use and combination of various ML techniques aid in improving the prediction accuracy. The proposed model will be a value addition to cost estimation of pile foundations.
Abstract
Purpose
Research shows that postsecondary students are largely unaware of the impact of algorithms on their everyday lives. Also, most noncomputer science students are not taught about algorithms as part of the regular curriculum. This exploratory, qualitative study aims to examine subject-matter experts’ insights and perceptions of the knowledge components, coping behaviors and pedagogical considerations that could aid faculty in teaching algorithmic literacy to postsecondary students.
Design/methodology/approach
Eleven semistructured interviews and one focus group were conducted with scholars and teachers of critical algorithm studies and related fields. A content analysis was manually performed on the transcripts using a mixture of deductive and inductive coding. Data analysis was aided by the coding software program Dedoose (2021) to determine frequency totals for occurrences of a code across all participants along with how many times specific participants mentioned a code. Then, findings were organized around the three themes of knowledge components, coping behaviors and pedagogy.
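The two frequency totals described, occurrences of a code across all participants and the number of participants mentioning it, can be sketched as follows; the participant labels and codes are hypothetical, not drawn from the study.

```python
from collections import Counter

def code_frequencies(transcripts):
    """transcripts: {participant: [codes applied to their excerpts]}.

    Returns total occurrences per code across all participants, and how
    many distinct participants mentioned each code (the two totals the
    coding software is used to determine)."""
    totals = Counter()
    participant_counts = Counter()
    for codes in transcripts.values():
        totals.update(codes)
        participant_counts.update(set(codes))  # each participant counted once
    return totals, participant_counts

# Hypothetical coded excerpts from three participants.
transcripts = {
    "P1": ["filter bubbles", "data privacy", "filter bubbles"],
    "P2": ["data privacy"],
    "P3": ["filter bubbles", "pedagogy"],
}
totals, by_participant = code_frequencies(transcripts)
```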
Findings
The findings suggested a set of 10 knowledge components that would contribute to students’ algorithmic literacy along with seven behaviors that students could use to help them better cope with algorithmic systems. A set of five teaching strategies also surfaced to help improve students’ algorithmic literacy.
Originality/value
This study contributes to improved pedagogy surrounding algorithmic literacy and validates existing multi-faceted conceptualizations and measurements of algorithmic literacy.
Matthew Philip Masterton, David Malcolm Downing, Bill Lozanovski, Rance Brennan B. Tino, Milan Brandt, Kate Fox and Martin Leary
Abstract
Purpose
This paper aims to present a methodology for the detection and categorisation of metal powder particles that are partially attached to additively manufactured lattice structures. It proposes a software algorithm to process micro computed tomography (µCT) image data, thereby providing a systematic and formal basis for the design and certification of powder bed fusion lattice structures, as is required for the certification of medical implants.
Design/methodology/approach
This paper details the design and development of a software algorithm for the analysis of µCT image data. The algorithm was designed to enable statistical analysis of the results with respect to key independent variables. Three data sets, each varying a single unique parameter, were processed by the algorithm to allow characterisation and comparison of like data sets.
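A 2D stand-in for one step of such a pipeline, connected-component labelling of a binarised image, is sketched below. This is illustrative only: the authors' 3D µCT algorithm (available in their repository) is not reproduced here. The largest component plays the role of the lattice strut; smaller components are candidate particles.

```python
from collections import deque

def label_components(img):
    """4-connected component labelling on a binary image (1 = material).

    A 2D analogue of segmenting a binarised µCT volume: the largest
    component would be the lattice strut, smaller ones candidate
    partially attached or loose powder particles."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    sizes = {}
    current = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not labels[sy][sx]:
                current += 1
                q = deque([(sy, sx)])
                labels[sy][sx] = current
                n = 0
                while q:  # breadth-first flood fill of this component
                    y, x = q.popleft()
                    n += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            q.append((ny, nx))
                sizes[current] = n
    return labels, sizes

# Toy slice: a vertical strut plus one detached 2-pixel particle.
img = [
    [1, 0, 0, 0],
    [1, 0, 1, 1],
    [1, 0, 0, 0],
]
labels, sizes = label_components(img)
```

Component sizes are exactly the kind of geometric attribute the paper's histograms summarise.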
Findings
This paper demonstrates the application of the proposed algorithm with three data sets, presenting a detailed visual rendering derived from the input image data, with the partially attached particles highlighted. Histograms for various geometric attributes are output, and a continuous trend between the three different data sets is highlighted based on the single unique parameter.
Originality/value
This paper presents a novel methodology for non-destructive algorithmic detection and categorisation of partially attached metal powder particles, of which no formal methods exist. This material is available to download as a part of a provided GitHub repository.
Gennaro Maione, Corrado Cuccurullo and Aurelio Tommasetti
Abstract
Purpose
The study aims to shed light on the historical and contemporary trends of biodiversity accounting literature, while simultaneously offering insights into the future of research in this sector. The paper also aims to raise awareness among accounting researchers about their role in preserving biodiversity and informing improvements in policy and practice in this area.
Design/methodology/approach
The Bibliometrix R-package is used to carry out an algorithmic historiography by implementing the reference publication year spectroscopy (RPYS) methodology, a bibliometric approach that allows researchers to identify and examine historical patterns in scientific literature.
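A minimal sketch of the RPYS idea is given below: count cited references per publication year and flag years that deviate from the median of a surrounding window, since large positive deviations mark historically important years. The data are hypothetical, and the Bibliometrix implementation differs in detail.

```python
from collections import Counter
from statistics import median

def rpys(cited_years, window=2):
    """Reference publication year spectroscopy (simplified).

    For each year, report the number of cited references minus the
    median count over a (2*window + 1)-year neighbourhood; peaks in
    this deviation indicate historical roots of the field."""
    counts = Counter(cited_years)
    deviations = {}
    for y in range(min(counts), max(counts) + 1):
        neighbourhood = [counts[y + d] for d in range(-window, window + 1)]
        deviations[y] = counts[y] - median(neighbourhood)
    return counts, deviations

# Hypothetical cited-reference years with a citation peak at 1866.
cited = [1864, 1865, 1866, 1866, 1866, 1866, 1867, 1868]
counts, dev = rpys(cited)
```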
Findings
The work provides a distinct and comprehensive discussion of the four distinct periods demarcating the progression of scientific discourse regarding biodiversity accounting. These periods are identified as Origins (1767–1864), Awareness (1865–1961), Consolidation (1962–1995) and Acceleration (1996–2021). The study offers an insightful analysis of the main thematic advancements, interpretative paradigm shifts and theoretical developments that occurred during these periods.
Research limitations/implications
The paper offers a significant contribution to the existing academic debate on the prospects for accounting scholars to concentrate their research efforts on biodiversity and thereby promote advancements in policy and practice in this sector.
Originality/value
The article represents the first example of using an algorithmic historiography approach to examine the corpus of literature dealing with biodiversity accounting. The value of this study comes from the fusion of historical methodology and perspective. To the best of the authors’ knowledge, this is also the first scientific investigation applying RPYS in the accounting sector.
Mu Shengdong, Liu Yunjie and Gu Jijian
Abstract
Purpose
By introducing the Stacking algorithm to solve the underfitting problem caused by insufficient data in traditional machine learning, this paper provides a new solution to the cold start problem of entrepreneurial borrowing risk control.
Design/methodology/approach
The authors introduce semi-supervised learning and ensemble learning into the field of migration learning and propose Stacking model migration learning. This approach independently trains models on entrepreneurial borrowing credit data, treats the migration strategy itself as the learning object and uses the Stacking algorithm to combine the prediction results of the source-domain and target-domain models.
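A one-parameter stand-in for the stacking step, blending source-domain and target-domain model outputs on labelled target data, can be sketched as follows. The probabilities and labels are hypothetical, and the authors' Stacking meta-learner is richer than this single blend weight.

```python
def stack_predictions(source_preds, target_preds, labels):
    """Model-migration stacking sketch: learn a blend weight w combining
    source-domain and target-domain model outputs,

        p = w * p_source + (1 - w) * p_target,

    by grid search minimising squared error on labelled target data
    (a one-parameter stand-in for the Stacking meta-learner)."""
    best_w, best_err = 0.0, float("inf")
    for i in range(101):
        w = i / 100
        err = sum((w * s + (1 - w) * t - y) ** 2
                  for s, t, y in zip(source_preds, target_preds, labels))
        if err < best_err:
            best_w, best_err = w, err
    return best_w

# Hypothetical default probabilities from the two base models.
source = [0.9, 0.4, 0.6, 0.2]  # trained on abundant traditional credit data
target = [0.5, 0.1, 0.9, 0.5]  # trained on scarce entrepreneurial data
labels = [1, 0, 1, 0]
w = stack_predictions(source, target, labels)
```

An interior weight (neither 0 nor 1) indicates that both domains contribute, which is the premise of the migration approach.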
Findings
The effectiveness of the two migration learning models is evaluated with real entrepreneurial borrowing data. The algorithmic performance of Stacking-based model migration learning is further improved compared to the benchmark model without migration learning techniques, with the model's area under curve value rising to 0.8. Comparing the two migration learning models reveals that the model-based approach performs better. The reason is that the sample-based approach only eliminates the noisy samples that are relatively less similar to the entrepreneurial borrowing data. However, the calculation and weighting of similarity are subjective, with no unified judgment standard or operation method, so there is no guarantee that the retained traditional credit samples share the same sample distribution and feature structure as the entrepreneurial borrowing data.
Practical implications
From a practical standpoint, this study provides a new solution to the cold start problem of entrepreneurial borrowing risk control. The small number of labeled high-quality samples cannot support the learning and deployment of big data risk control models; this is the cold start problem of the entrepreneurial borrowing risk control system. By extending the training sample set with auxiliary domain data through suitable migration learning methods, the prediction performance of the model can be improved to a certain extent and more generalized laws can be learned.
Originality/value
This paper introduces the thought method of migration learning to the entrepreneurial borrowing scenario, provides a new solution to the cold start problem of the entrepreneurial borrowing risk control system and verifies the feasibility and effectiveness of the migration learning method applied in the risk control field through empirical data.
Abstract
Purpose
Frames play an important role in determining the geometric properties of curves, such as curvature and torsion. In particular, determining the point types of a curve, such as convexity or concavity, is also possible with frames. The Serret-Frenet frame is generally used in curve theory. However, the Serret-Frenet frame does not work when the second derivative is zero. To eliminate this problem, the quasi-frame was introduced. In this study, the quasi-frames of polynomial and rational Bezier curves are calculated by an algorithmic method. Thus, it is possible to construct the frame even at points where the second derivative of the curve vanishes. In this respect, the study makes a substantial contribution to computer-aided geometric design.
Design/methodology/approach
In this study, the quasi-frame, an alternative frame defined at all intermediate points of rational Bezier curves, was generated by an algorithmic method, and some variants of this frame were analyzed. Even at points where the second derivative of such rational Bezier curves is zero, a frame can still be constructed.
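A sketch of the quasi-frame computation for a polynomial cubic Bezier curve is given below, using the standard construction with a fixed projection vector k: unit tangent t, quasi-normal n = (t × k)/|t × k| and quasi-binormal b = t × n. The control points are illustrative, and the rational case adds weights not shown here.

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    n = math.sqrt(sum(x * x for x in a))
    return tuple(x / n for x in a)

def quasi_frame(pts, u, k=(0.0, 0.0, 1.0)):
    """Quasi-frame of a cubic Bezier curve at parameter u.

    Unlike the Serret-Frenet frame, only the first derivative is
    needed, so the frame exists where the second derivative vanishes
    (the tangent must simply not be parallel to the projection vector k)."""
    p0, p1, p2, p3 = pts
    # Hodograph: r'(u) = 3(1-u)^2 (p1-p0) + 6(1-u)u (p2-p1) + 3u^2 (p3-p2)
    d = tuple(3*(1-u)**2 * a + 6*(1-u)*u * b + 3*u**2 * c
              for a, b, c in zip(sub(p1, p0), sub(p2, p1), sub(p3, p2)))
    t = norm(d)
    n = norm(cross(t, k))
    b = cross(t, n)
    return t, n, b

pts = [(0, 0, 0), (1, 2, 0), (2, -1, 1), (3, 0, 0)]
t, n, b = quasi_frame(pts, 0.5)
```

The resulting triple (t, n, b) is orthonormal by construction, which is what makes it usable as a moving frame.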
Findings
Several examples presented at the end of the paper, covering the quasi-frame of rational Bezier, polynomial Bezier, and linear, quadratic and cubic Bezier curves, demonstrate the efficacy and precision of the approach.
Originality/value
The quasi-frame of a rational Bezier curve is computed here for the first time. Owing to the quasi-frame, a solution is found for the undesirable rotation of the frame around the tangent.
Mélissa Fortin, Erica Pimentel and Emilio Boulianne
Abstract
Purpose
This study explores how introducing a permissioned blockchain in a supply chain context impacts accountability relationships and the process of rendering an account. The authors explore how implementing a digital transformation impacts the governance of network transactions.
Design/methodology/approach
The authors mobilize 28 interviews and documentary analysis. The authors focus on early blockchain adopters to get an insight into how implementing a permissioned blockchain can transform information sharing, coordination and collaboration between business partners, now converted into network participants.
Findings
The authors suggest that implementing a permissioned blockchain impacts accountability across three levers, namely through the ledger, through the code and through the people, where these levers are interconnected. Blockchains are often valued for their ability to enable transparency through the visibility of transactions, but the authors argue that this is an incomplete view. Rather, transparency alone does not help to satisfy a duty of accountability, as it can result in selective disclosure or obfuscation.
Originality/value
The authors extend the conceptualizations of accountability in the blockchain literature by focusing on how accountability relationships are enacted, and accounts are rendered in a permissioned blockchain context. Additionally, the authors complement existing work on accountability and governance by suggesting an integrated model across three dimensions: ledger, code and people.
Ranjit Roy Ghatak and Jose Arturo Garza-Reyes
Abstract
Purpose
The research explores the shift to Quality 4.0, examining the move towards a data-focussed transformation within organizational frameworks. This transition is characterized by incorporating Industry 4.0 technological innovations into existing quality management frameworks, signifying a significant evolution in quality control systems. Despite the evident advantages, the practical deployment in the Indian manufacturing sector encounters various obstacles. This research is dedicated to a thorough examination of these impediments. It is structured around a set of pivotal research questions: First, it seeks to identify the key barriers that impede the adoption of Quality 4.0. Second, it aims to elucidate these barriers' interrelations and mutual dependencies. Third, the research prioritizes these barriers in terms of their significance to the adoption process. Finally, it contemplates the ramifications of these priorities for the strategic advancement of manufacturing practices and the development of informed policies. By answering these questions, the research provides a detailed understanding of the challenges faced. It offers actionable insights for practitioners and policymakers implementing Quality 4.0 in the Indian manufacturing sector.
Design/methodology/approach
Employing Interpretive Structural Modelling (ISM) and Cross-Impact Matrix Multiplication Applied to Classification (MICMAC), the authors probe the interdependencies amongst fourteen identified barriers inhibiting Quality 4.0 adoption. These barriers were categorized according to their driving power and dependence, providing a richer understanding of the dynamic obstacles within the Technology–Organization–Environment (TOE) framework.
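The driving-power/dependence computation at the heart of ISM/MICMAC can be sketched as follows: take the transitive closure of the contextual relation matrix to obtain the reachability matrix, then read driving power off the row sums and dependence off the column sums. The four-barrier relation matrix below is hypothetical, not the study's fourteen-barrier data.

```python
def micmac(adj):
    """ISM/MICMAC sketch.

    adj[i][j] = 1 means barrier i leads to barrier j. The transitive
    closure (with reflexive diagonal) gives the reachability matrix;
    driving power = row sum, dependence = column sum, and each barrier
    falls into a driver/linkage/dependent/autonomous quadrant."""
    n = len(adj)
    r = [[adj[i][j] or i == j for j in range(n)] for i in range(n)]
    for k in range(n):          # Warshall transitive closure
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    driving = [sum(row) for row in r]
    dependence = [sum(r[i][j] for i in range(n)) for j in range(n)]
    mid = (n + 1) / 2           # simple quadrant threshold
    quadrant = []
    for d, p in zip(driving, dependence):
        if d > mid and p > mid:
            quadrant.append("linkage")
        elif d > mid:
            quadrant.append("driver")
        elif p > mid:
            quadrant.append("dependent")
        else:
            quadrant.append("autonomous")
    return driving, dependence, quadrant

# Hypothetical 4-barrier contextual relation matrix (1 = "leads to").
adj = [
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 0, 0],
]
driving, dependence, quadrant = micmac(adj)
```

High-driving, low-dependence barriers (the "drivers") are the ones such an analysis would prioritise for intervention.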
Findings
The study results highlight the lack of Quality 4.0 standards and Big Data Analytics (BDA) tools as fundamental obstacles to integrating Quality 4.0 within the Indian manufacturing sector. Additionally, the study results contravene dominant academic narratives, suggesting that the cumulative impact of organizational barriers is marginal, contrary to theoretical postulations emphasizing their central significance in Quality 4.0 assimilation.
Practical implications
This research provides concrete strategies, such as developing a collaborative platform for sharing best practices in Quality 4.0 standards, which fosters a synergistic relationship between organizations and policymakers, for instance, by creating a joint task force, comprised of industry leaders and regulatory bodies, dedicated to formulating and disseminating comprehensive guidelines for Quality 4.0 adoption. This initiative could lead to establishing industry-wide standards, benefiting from the pooled expertise of diverse stakeholders. Additionally, the study underscores the necessity for robust, standardized Big Data Analytics tools specifically designed to meet the Quality 4.0 criteria, which can be developed through public-private partnerships. These tools would facilitate the seamless integration of Quality 4.0 processes, demonstrating a direct route for overcoming the barriers of inadequate standards.
Originality/value
This research delineates specific obstacles to Quality 4.0 adoption by applying the TOE framework, detailing how these barriers interact with and influence each other, particularly highlighting the previously overlooked environmental factors. The analysis reveals a critical interdependence between “lack of standards for Quality 4.0” and “lack of standardized BDA tools and solutions,” providing nuanced insights into their conjoined effect on stalling progress in this field. Moreover, the study contributes to the theoretical body of knowledge by mapping out these novel impediments, offering a more comprehensive understanding of the challenges faced in adopting Quality 4.0.