Search results
1 – 10 of 20
Luca Gabriele De Vivo Nicoloso, Joshua Pelz, Herb Barrack and Falko Kuester
Abstract
Purpose
There are over 40 million amputees globally with more than 185,000 Americans losing their limbs every year. For most of the world, prosthetic devices remain too expensive and uncomfortable. This paper aims to outline advancements made by a multidisciplinary research group, interested in advancing the restoration of human motion through accessible lower limb prostheses.
Design/methodology/approach
Customization, comfort and functionality are the most important metrics reported by prosthetists and patients. This paper presents the design and manufacturing of a custom-made, cost-effective and functional three-dimensional (3D) printed monocoque transtibial prosthesis. The design of the prosthesis integrates 3D imaging, modelling and optimization techniques coupled with additive manufacturing.
Findings
The successful fabrication of a functional monocoque prosthesis through 3D printing indicates the workflow may be a solution to the worldwide accessibility crisis. The digital workflow developed in this work offers great potential for providing prosthetic devices to rural communities that lack access to skilled prosthetists. The authors found that the workflow, combined with 3D printing, can produce custom monocoque prostheses (Figure 16) that are comfortable, functional and properly aligned. In comparison with traditional prosthetic devices, the authors lowered the average cost, weight and time of production by 95%, 55% and 95%, respectively.
Social implications
This novel digital design and manufacturing workflow has the potential to democratize and globally proliferate access to prosthetic devices, which restore the patient’s mobility, quality of life and health. LIMBER’s toolbox can reach places where proper prosthetic and orthotic care is not available. The digital workflow reduces the cost of making custom devices by an order of magnitude, enabling broader reach, faster access and improved comfort. This is particularly important for children, who grow quickly and need new devices every few months or years; for them, timely access is both physically and psychologically important.
Originality/value
In this manuscript, the authors show the application of digital design techniques for fabricating prosthetic devices. The proposed workflow implements several advantageous changes and, most importantly, digitally blends the three components of a transtibial prosthesis into a single, 3D printable monocoque device. The development of a novel unibody transtibial device that is properly aligned and adjusted digitally, greatly reduces the number of visits an amputee must make to a clinic to have a certified prosthetist adjust and modify their prosthesis. The authors believe this novel workflow has the potential to ease the worldwide accessibility crisis for prostheses.
Kateryna Kravchenko, Tim Gruchmann, Marina Ivanova and Dmitry Ivanov
Abstract
Purpose
The ripple effect (i.e. disruption propagation in networks) belongs to one of the central pillars in supply chain resilience and viability research, constituting a type of systemic disruption. A considerable body of knowledge has been developed over the last two decades to examine the ripple effect triggered by instantaneous disruptions, e.g. earthquakes or factory fires. In contrast, far less research has been devoted to studying the ripple effect under long-term disruptions, such as in the wake of the COVID-19 pandemic.
Design/methodology/approach
This study qualitatively analyses secondary data on the ripple effects incurred in automotive and electronics supply chains. Through the analysis of five distinct case studies illustrating operational practices used by companies to cope with the ripple effect, we uncover a disruption propagation mechanism through the supply chains during the semiconductor shortage in 2020–2022.
Findings
Applying a theory elaboration approach, we first sequence the triggers for the ripple effects induced by the semiconductor shortage. Second, the measures to mitigate the ripple effect employed by automotive and electronics companies are delineated with a cost-effectiveness analysis. Finally, the results are summarised and generalised into a causal loop diagram providing a more complete conceptualisation of long-term disruption propagation.
Originality/value
The results add to the academic discourse on appropriate mitigation strategies. They can help build scenarios for simulation and analytical models to inform decision-making as well as incorporate systemic risks from ripple effects into a normal operations mode. In addition, the findings provide practical recommendations for implementing short- and long-term measures during long-term disruptions.
Abstract
Purpose
The need to digitise is an awareness that is shared across our community globally, and yet the likelihood that resources, expertise and institutions intersect is far lower. A strategic view towards the long-term goal of cultivating and digitally upskilling the younger generation, building a community and creating awareness through digital activities that can benefit cultural heritage is necessary.
Design/methodology/approach
The work involves distributing tasks between stakeholders and local volunteers. It uses close-range photogrammetry for reconstructing the entire heritage site in 3D, and outlines achievable digitisation activities in the crowdsourced, close-range photogrammetry of a 19th century Cheah Kongsi clan temple located in George Town, a UNESCO World Heritage Site in Penang, Malaysia.
Findings
The research explores whether loosely distributing photogrammetry work that partially simulates an unorganised crowdsourcing activity can generate complete models of a site that meet the criteria set by the needs of the clan temple. The data acquired provided a complete visual record of the site, but the 3D models generated through the distributed tasks revealed gaps that required further measurements.
Practical implications
The key lessons learned in this activity are transferable. Furthermore, the involvement of volunteers can also raise awareness of ownership, identity and care for local cultural heritage.
Social implications
The key lessons learned in this activity are transferable. Furthermore, the involvement of volunteers can also raise awareness of identity, ownership, cultural understanding and care for local cultural heritage.
Originality/value
The semi-formal activities indicated that set goals can be achieved through crowdsourcing, that the new generation can be taught to care for their heritage and that digital skills can be transferred through such activities. The mass crowdsourcing activity is the first of its kind to attempt to completely digitise a cultural heritage site in 3D via distributed activities.
Vincenzo Varriale, Antonello Cammarano, Francesca Michelino and Mauro Caputo
Abstract
Purpose
The purpose of this study is to identify and characterize the role of both original equipment manufacturer (OEM) and module supplier (MS) knowledge in the smartphone industry. In particular, this study aims to evaluate which of the two actors possesses the knowledge that has the greatest impact on market satisfaction.
Design/methodology/approach
This study explores and combines the concepts of modularity and knowledge management by investigating the patent portfolio of 16 leading smartphone OEMs and 144 MSs. The applied methodology is based on the content analysis of patent data to extract information on both OEM’s and MS’s component knowledge.
Findings
The results show that, although its components are purchased from external MSs, the OEM should preserve both a general and a specific concentration of component knowledge, as well as knowledge of the end product, to achieve greater market satisfaction. Moreover, a positive direct relationship was found for the MS between the general concentration of component knowledge and market satisfaction.
Originality/value
The novelty of this study is to segment the knowledge of both the OEM and the MS on multiple levels. To the best of the authors’ knowledge, this is one of the first studies that investigates the end product and component knowledge of both actors by filtering patent data using text-mining techniques. The originality of this work is to capture the relationship between the different shades of knowledge of each actor and market satisfaction.
Fatma Betül Yeni, Beren Gürsoy Yılmaz, Behice Meltem Kayhan, Gökhan Özçelik and Ömer Faruk Yılmaz
Abstract
Purpose
This study aims to address challenges related to long lead time within a hazelnut company, primarily attributed to product quality issues. The purpose is to propose an integrated lean-based methodology incorporating a continuous improvement cycle, drawing on Lean Six Sigma (LSS) and Industry 4.0 applications.
Design/methodology/approach
The research adopts a systematic approach, commencing with a current state analysis using value stream mapping (VSM) and fishbone analysis to identify the underlying problems causing long lead time. A Pareto analysis categorizes these problems, distinguishing between supplier-related issues and deficiencies in lean applications. Lean tools are initially implemented, followed by a future state VSM. Supplier-related issues are then addressed, employing root cause analyses and Industry 4.0-based countermeasures, including a proposed supplier selection model.
Findings
The study reveals that, despite initial lean implementations, lead times remain high. Addressing supplier-related issues, particularly through the proposed supplier selection model, significantly reduces the number of suppliers and contributes to lead time reduction. Industry 4.0-based countermeasures ensure traceability and strengthen supplier relationships.
Originality/value
This research introduces a comprehensive LSS methodology, practically demonstrating the application of various tools and providing managerial insights for practitioners and policymakers. The study contributes theoretically by addressing challenges comprehensively, practically by showcasing tool applications and managerially by offering guidance for system performance enhancement.
Heru Agus Santoso, Brylian Fandhi Safsalta, Nanang Febrianto, Galuh Wilujeng Saraswati and Su-Cheng Haw
Abstract
Purpose
Plant cultivation holds a pivotal role in agriculture, necessitating precise disease identification for the overall health of plants. This research conducts a comprehensive comparative analysis between two prominent deep learning algorithms, convolutional neural network (CNN) and DenseNet121, with the goal of enhancing disease identification in tomato plant leaves.
Design/methodology/approach
The dataset employed in this investigation is a fusion of primary data and publicly available data, covering 13 distinct disease labels and a total of 18,815 images for model training. The data pre-processing workflow prioritized activities such as normalizing pixel dimensions, implementing data augmentation and achieving dataset balance, which were subsequently followed by the modeling and testing phases.
Findings
Experimental findings elucidated the superior performance of the DenseNet121 model over the CNN model in disease classification on tomato leaves. The DenseNet121 model attained a training accuracy of 98.27%, a validation accuracy of 87.47% and average recall, precision and F1-score metrics of 87%, 88% and 87%, respectively. The ultimate aim was to implement the optimal classifier for a mobile application, namely Tanamin.id, and, therefore, DenseNet121 was the preferred choice.
Originality/value
The integration of private and public data significantly contributes to determining the optimal method. The CNN method achieves a training accuracy of 90.41% and a validation accuracy of 83.33%, whereas the DenseNet121 method excels with a training accuracy of 98.27% and a validation accuracy of 87.47%. The DenseNet121 architecture, comprising 121 layers, a global average pooling (GAP) layer and a dropout layer, showcases its effectiveness. Leveraging categorical_crossentropy as the loss function and utilizing the stochastic gradient descent (SGD) optimizer with a learning rate of 0.001 guides the course of the training process. The experimental results unequivocally demonstrate the superior performance of DenseNet121 over CNN.
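For readers unfamiliar with the reported metrics, the averaged recall, precision and F1 figures above are conventionally computed per class and then macro-averaged. A minimal sketch of that computation follows; the three class labels and the tiny label lists are illustrative only and are not drawn from the study's 13-label dataset:

```python
def macro_metrics(y_true, y_pred):
    """Compute macro-averaged precision, recall and F1 over all classes.

    Each class's precision/recall is computed one-vs-rest, then the
    per-class scores are averaged with equal weight (macro averaging).
    """
    classes = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Illustrative 3-class example (hypothetical labels, not the study's data)
y_true = ["healthy", "blight", "mosaic", "healthy", "blight", "mosaic"]
y_pred = ["healthy", "blight", "blight", "healthy", "blight", "mosaic"]
p, r, f = macro_metrics(y_true, y_pred)
```

In practice such metrics are usually obtained from a library (e.g. scikit-learn's `precision_recall_fscore_support` with `average="macro"`); the hand-rolled version above only makes the averaging explicit.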
Luca Marinelli, Fabio Fiano, Gian Luca Gregori and Lucia Michela Daniele
Abstract
Purpose
The purpose of this paper is to investigate the food and beverage automatic retail environment by analysing the impact of planograms, conceived as a visual merchandising practice, and of shopping time (the time spent making a purchase) on food consumer purchasing behaviour, thereby further enriching the debate on the ability of companies to absorb customer knowledge.
Design/methodology/approach
A real-world experiment was conducted using a sample of 27,230 valid observations of consumer purchasing decision-making processes at automatic vending machines (AVMs). Data were collected by a shopper behaviour analytics system that allows for a better understanding of the AVM users' behaviour. Two sets of regressions were run to test the two hypotheses.
Findings
The experimental results demonstrated that planograms – the planned, systematic organisation of products in an AVM – positively impact food purchases. A planogram acts as a mediator in the relationship between shopping time and purchase, resulting in shorter shopping times and more purchases.
Originality/value
This work adds to the customer knowledge literature by focussing on customer behaviour in the food and beverage automated shopping environment. The shopper analytics technology adopted to collect real-time data leads to a better understanding of the purchasing behaviour of AVMs' users and provides new marketing and retail insights into AVMs' performance that retailers can use to improve their marketing strategies.
Abstract
Purpose
The issue of cybersecurity has been cast as the focal point of a fight between two conflicting governance models: the nation-state model of national security and the global governance model of multi-stakeholder collaboration, as seen in forums like IGF, IETF, ICANN, etc. There is a strange disconnect, however, between this supposed fight and the actual control over cybersecurity “on the ground”. This paper aims to reconnect discourse and control via a property rights approach, where control is located first and foremost in ownership.
Design/methodology/approach
This paper first conceptualizes current governance mechanisms through ownership and property rights. These concepts locate control over internet resources and help us understand ongoing shifts in control. Such shifts in governance are actually happening: security governance is being patched left and right, but these arrangements bear little resemblance to either the national security model of states or the global model of multi-stakeholder collaboration. With the conceptualization in hand, the paper then presents case studies of governance that have emerged around specific security externalities.
Findings
While not all mechanisms are equally effective, in each of the studied areas, the author found evidence of private actors partially internalizing the externalities, mostly on a voluntary basis and through network governance mechanisms. No one thinks that this is enough, but it is a starting point. Future research is needed to identify how these mechanisms can be extended or supplemented to further improve the governance of cybersecurity.
Originality/value
This paper bridges together the disconnected research communities on governance and (technical) cybersecurity.