Search results

1 – 10 of over 68,000
Article
Publication date: 4 August 2023

Can Uzun and Raşit Eren Cangür

Abstract

Purpose

This study presents an ontological approach to assess the architectural outputs of generative adversarial networks. This paper aims to assess the performance of the generative adversarial network in representing building knowledge.

Design/methodology/approach

The proposed ontological assessment consists of five steps. These are, respectively, creating an architectural data set, developing ontology for the architectural data set, training the You Only Look Once object detection with labels within the proposed ontology, training the StyleGAN algorithm with the images in the data set and finally, detecting the ontological labels and calculating the ontological relations of StyleGAN-generated pixel-based architectural images. The authors propose and calculate ontological identity and ontological inclusion metrics to assess the StyleGAN-generated ontological labels. This study uses 300 bay window images as an architectural data set for the ontological assessment experiments.
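The abstract names ontological identity and ontological inclusion but does not give their formulas; the sketch below is a minimal set-based illustration of how such label-level metrics could be computed from object-detection output. The function names, the toy part-of ontology and the label values are assumptions for illustration only, not the authors' implementation.

```python
def ontological_identity(detected, expected):
    """Fraction of expected ontology labels that the detector found in a generated image."""
    detected, expected = set(detected), set(expected)
    if not expected:
        return 1.0
    return len(detected & expected) / len(expected)

def ontological_inclusion(detected, part_of, whole):
    """Fraction of detected labels that the ontology allows as parts of `whole`."""
    detected = set(detected)
    if not detected:
        return 1.0
    valid = {p for p in detected if whole in part_of.get(p, set())}
    return len(valid) / len(detected)

# Toy ontology for a bay window (hypothetical labels).
part_of = {"glass_pane": {"bay_window"}, "frame": {"bay_window"}, "sill": {"bay_window"}}
expected = ["glass_pane", "frame", "sill"]
detected = ["glass_pane", "frame", "door"]  # "door" is not a valid part of a bay window

identity = ontological_identity(detected, expected)                  # 2/3 with this toy data
inclusion = ontological_inclusion(detected, part_of, "bay_window")   # 2/3 with this toy data
```

A generated image would score 1.0 on both metrics only if every expected element is detected and every detected element is a valid part under the ontology.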

Findings

The ontological assessment provides semantic-based queries on StyleGAN-generated architectural images by checking the validity of the building knowledge representation. Moreover, this ontological validity reveals the building element label-specific failure and success rates simultaneously.

Originality/value

This study contributes to the assessment process of the generative adversarial networks through ontological validity checks rather than only conducting pixel-based similarity checks; semantic-based queries can introduce the GAN-generated, pixel-based building elements into the architecture, engineering and construction industry.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 6 January 2023

Weihao Luo and Yueqi Zhong

Abstract

Purpose

The paper aims to transfer the item image of a given clothing product to a corresponding area of the user image. Existing classical methods suffer from unconstrained deformation of clothing and occlusion caused by hair or poses, which leads to loss of details in the try-on results. In this paper, the authors present a details-oriented virtual try-on network (DO-VTON), which allows synthesizing high-fidelity try-on images with preserved characteristics of target clothing.

Design/methodology/approach

The proposed try-on network consists of three modules. The fashion parsing module (FPM) is designed to generate the parsing map of a reference person image. The geometric matching module (GMM) warps the input clothing and matches it to the torso area of the reference person, guided by the parsing map. The try-on module (TOM) generates the final try-on image. In both FPM and TOM, an attention mechanism is introduced to obtain sufficient features, which enhances the preservation of clothing characteristics. In GMM, a two-stage coarse-to-fine training strategy with a grid regularization loss (GR loss) is employed to optimize the clothing warping.

Findings

In this paper, the authors propose a three-stage image-based virtual try-on network, DO-VTON, that aims to generate realistic try-on images with extensive characteristics preserved.

Research limitations/implications

The authors’ proposed algorithm can provide a promising tool for image-based virtual try-on.

Practical implications

The authors’ proposed method is a technology that enables consumers to purchase favored clothes online and reduces the return rate in e-commerce.

Originality/value

The authors’ proposed algorithm can provide a promising tool for image-based virtual try-on.

Details

International Journal of Clothing Science and Technology, vol. 35 no. 4
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 21 August 2019

Montek Singh, Utkarsh Bajpai, Vijayarajan V. and Surya Prasath

Abstract

Purpose

There are various style options available when one buys clothes on online shopping websites; however, incorporating new fashion trends or choices requires further user interaction to generate fashionable clothes. The paper aims to discuss this issue.

Design/methodology/approach

Based on generative adversarial networks (GANs) from the deep learning paradigm, the authors suggest a model system that will take the latest fashion trends and the clothes bought by users as input and generate new clothes. The new set of clothes will be based on trending fashion but at the same time will have attributes of clothes that were bought by the consumer earlier.

Findings

In the proposed machine learning based approach, the clothes generated by the system will be personalized for different types of consumers. This will help manufacturing companies come up with designs that directly target the customer.

Research limitations/implications

The biggest limitation of the collected data set is that the clothes in the two domains do not belong to a specific category. For instance, the vintage clothes data set has coats, dresses, skirts, etc., and these different types of clothes are not segregated. There is also no restriction on the number of images of each type of cloth: there can be many images of dresses and only a few of coats, which can affect the end results. The aim of the paper was to find whether new and desirable clothes can be created from two different domains or not. Analyzing the impact of the number of images for each class of cloth is something the authors aim to work on in the future.

Practical implications

The authors believe such a personalized experience can increase the sales of fashion stores, and here they demonstrate the feasibility of such a clothes generation system.

Originality/value

Applying GANs from the deep learning paradigm to generate fashionable clothes.

Details

International Journal of Clothing Science and Technology, vol. 32 no. 2
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 4 March 2019

Viriya Taecharungroj

Abstract

Purpose

The purpose of this paper is to use user-generated content (UGC) on social media platforms to infer the possible place brand identities of two famous metropolitan areas in Bangkok, Thailand, namely, Khaosan Road and Yaowarat (Bangkok’s Chinatown), both of which are famous for their street vendors and nightlife. These two places are interesting study sites because of recent identity conflicts among their stakeholders. The method developed in this research can help other places to better understand place brand identities and, as such, effectively plan for and manage those places.

Design/methodology/approach

The author used content analysis to study 782 user-generated images on Flickr and 9,633 user-generated textual reviews of Khaosan Road and Yaowarat from TripAdvisor and Google Maps’ Local Guide. MAXQDA was used to code all the images. User-generated textual reviews were studied using Leximancer. The author also introduced a positivity of concept analysis to identify positive and negative components of place brand identity.
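The exact scoring behind the "positivity of concept" analysis is not given in this abstract; a minimal sketch, assuming a concept's positivity is simply the share of its positive mentions among all valenced mentions, might look like this (concept names and mention counts are hypothetical):

```python
def positivity(concept_mentions):
    """Positivity score per concept: positive mentions / (positive + negative) mentions.

    A simple illustrative stand-in for the positivity-of-concept analysis;
    the exact formula used in the study is assumed, not documented here.
    """
    scores = {}
    for concept, (pos, neg) in concept_mentions.items():
        scores[concept] = pos / (pos + neg) if pos + neg else None
    return scores

# Hypothetical mention counts extracted from textual reviews: (positive, negative).
mentions = {"street_food": (120, 30), "hygiene": (15, 45), "nightlife": (80, 20)}
scores = positivity(mentions)  # e.g. street_food -> 0.8, hygiene -> 0.25
```

Concepts scoring above 0.5 would count as positive components of the place brand identity, and below 0.5 as negative ones.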

Findings

The author developed a place brand identity framework that includes three pillars, namely, place physics, place practices and place personality. Content analysis of the images generated 105 codes and a count of the frequency of the codes that represent place brand identity. Content analysis of textual reviews created the concepts in the three pillars and identified the positive and negative concepts for both places. The results of both image and text analyses showed that street food vending is one of the most salient components of place brand identity for both Khaosan Road and Yaowarat.

Practical implications

The author suggested several place branding strategies for the Bangkok Metropolitan Administration such as turning Khaosan Road into a music scene for both visitors and locals, controlling excessive and aggressive commercialism, sponsoring the production of creative and authentic content, initiating a compelling online campaign that focusses on the items sold in Yaowarat, hosting a spotlight event such as a seafood festival and improving hygiene and walkability.

Originality/value

Both the advancement of digital technologies and the complexity of stakeholders create a need for empirical studies on place branding involving the participation of the widest possible range of stakeholders and studies on the influence of social media. This research is the first to use both image and text analyses to study place brand identity from UGC. The use of both analyses allows the two methods to complement one another while mitigating the weaknesses of each.

Details

Journal of Place Management and Development, vol. 12 no. 1
Type: Research Article
ISSN: 1753-8335

Article
Publication date: 2 May 2019

Hadi Mahami, Farnad Nasirzadeh, Ali Hosseininaveh Ahmadabadian, Farid Esmaeili and Saeid Nahavandi

Abstract

Purpose

This paper aims to propose an automatic imaging network design to improve the efficiency and accuracy of automated construction progress monitoring. The proposed method will address two shortcomings of the previous studies, including the large number of captured images required and the incompleteness and inaccuracy of generated as-built models.

Design/methodology/approach

Using the proposed method, the number of required images is minimized in two stages. In the first stage, manual photogrammetric network design is used to decrease the number of camera stations subject to proper constraints. Image acquisition is then performed, and the captured images are used to generate a 3D point cloud model. In the second stage, new software for automatic imaging network design is developed and used to cluster and select the optimal images automatically, using the dense point cloud model generated earlier, and the final optimum camera stations are determined. Automated progress monitoring can then be performed by imaging at the selected camera stations to produce periodic progress reports.
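The abstract does not specify the clustering and selection algorithm; as an illustration of the underlying idea of keeping only a minimal set of camera stations that still sees every part of the scene, here is a greedy set-cover sketch over a hypothetical visibility map (station names and point IDs are invented, not the authors' method):

```python
def select_stations(coverage, target_points):
    """Greedy set-cover: repeatedly pick the station that sees the most uncovered points."""
    remaining = set(target_points)
    chosen = []
    while remaining:
        best = max(coverage, key=lambda s: len(coverage[s] & remaining))
        if not coverage[best] & remaining:
            break  # some points are not visible from any station
        chosen.append(best)
        remaining -= coverage[best]
    return chosen

# Hypothetical visibility map: which scene points each candidate station can image.
coverage = {
    "cam_A": {1, 2, 3, 4},
    "cam_B": {3, 4, 5},
    "cam_C": {5, 6},
    "cam_D": {2, 6},
}
stations = select_stations(coverage, [1, 2, 3, 4, 5, 6])  # covers all six points
```

Greedy set cover is a classic heuristic for this kind of camera-placement problem; with the toy map above it keeps two of the four candidate stations.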

Findings

The achieved results show that, using the proposed manual and automatic imaging network design methods, the number of required images is decreased by 65 and 75 per cent, respectively. Moreover, the accuracy and completeness of the point cloud reconstruction are improved, and the quantity of performed work is determined with an accuracy close to 100 per cent.

Practical implications

It is believed that the proposed method may present a novel and robust tool for automated progress monitoring using unmanned aerial vehicles, based on photogrammetry and computer vision techniques. Using the proposed method, the number of required images is minimized, and the accuracy and completeness of the point cloud reconstruction are improved.

Originality/value

To generate a point cloud reconstruction based on close-range photogrammetry principles, hundreds of images must be captured and processed, which is time-consuming and labor-intensive. No previous study has attempted to reduce this large number of required images. Moreover, a lack of images in some areas leads to an incomplete or inaccurate model. This research resolves these shortcomings.

Article
Publication date: 1 March 2024

Wei-Zhen Wang, Hong-Mei Xiao and Yuan Fang

Abstract

Purpose

Nowadays, artificial intelligence (AI) technology has demonstrated extensive applications in the field of art design. Attribute editing is an important means to realize clothing style and color design via computer language; it aims to edit and control the garment image based on the specified target attributes while preserving other details from the original image. Current image attribute editing models often generate images containing missing or redundant attributes. To address this problem, this paper proposes a novel design method utilizing the fashion-attribute generative adversarial network (AttGAN) model for image attribute editing specifically tailored to women’s blouses.

Design/methodology/approach

The proposed design method primarily focuses on optimizing the feature extraction network and the loss function. To enhance the feature extraction capability of the model, the number of layers in the feature extraction network was increased, and the structural similarity index measure (SSIM) loss function was employed to ensure the independent attributes of the original image remained consistent. The characteristic-preserving virtual try-on network (CP_VTON) dataset was used for training to enable the editing of sleeve length and color specifically for women’s blouses.
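For reference, the SSIM and PSNR measures discussed here can be computed as follows. This is a sketch of the standard single-window formulas on flat grayscale pixel lists, not the authors' windowed loss implementation; the sample pixel values are invented.

```python
import math

def _mean(a):
    return sum(a) / len(a)

def ssim(x, y, L=255):
    """Global (single-window) SSIM between two equal-length grayscale pixel lists."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # standard stabilizing constants
    mx, my = _mean(x), _mean(y)
    vx = _mean([(p - mx) ** 2 for p in x])
    vy = _mean([(p - my) ** 2 for p in y])
    cov = _mean([(p - mx) * (q - my) for p, q in zip(x, y)])
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def psnr(x, y, L=255):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    mse = _mean([(p - q) ** 2 for p, q in zip(x, y)])
    return float("inf") if mse == 0 else 10 * math.log10(L**2 / mse)

# Two invented 8-pixel grayscale patches for demonstration.
a = [52, 55, 61, 59, 79, 61, 76, 61]
b = [54, 55, 60, 58, 79, 62, 75, 63]
```

SSIM is 1.0 for identical inputs and decreases as luminance, contrast or structure diverge, which is why maximizing it as a loss encourages the edited image to keep the original's untouched attributes.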

Findings

The experimental results demonstrate that the optimization model’s generated outputs have significantly reduced problems related to missing attributes or visual redundancy. Through a comparative analysis of the numerical changes in the SSIM and peak signal-to-noise ratio (PSNR) before and after the model refinement, it was observed that the improved SSIM increased substantially by 27.4%, and the PSNR increased by 2.8%, serving as empirical evidence of the effectiveness of incorporating the SSIM loss function.

Originality/value

The proposed algorithm provides a promising tool for precise image editing of women’s blouses based on the GAN. This introduces a new approach to eliminate semantic expression errors in image editing, thereby contributing to the development of AI in clothing design.

Details

International Journal of Clothing Science and Technology, vol. 36 no. 2
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 28 September 2012

Jiang Hongxia, Wang Hongfu, Liu Jihong and Pan Ruru

Abstract

Purpose

The purpose of this paper is to research an automatic method for generating FFT images and image patterns for textiles based on FFT theory.

Design/methodology/approach

In the research, a program was developed to generate FFT images using the FFT algorithm. The process of automatic FFT image generation can be divided into the following steps: initializing the image size, painting the source image, assigning the color pattern, transforming the image by FFT, designing the mask template, and combining the image pattern by point diagram. These image patterns can then be applied to textiles.
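The first steps of this pipeline can be sketched with a naive discrete Fourier transform on a tiny canvas. The painted motif and image size are illustrative assumptions; a real implementation would use a fast FFT library rather than the O(n^4) loop below.

```python
import cmath

def dft2(img):
    """Naive 2-D discrete Fourier transform of a small square grayscale image (list of rows)."""
    n = len(img)
    out = [[0j] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0j
            for x in range(n):
                for y in range(n):
                    s += img[x][y] * cmath.exp(-2j * cmath.pi * (u * x + v * y) / n)
            out[u][v] = s
    return out

# Steps 1-2: initialize an 8x8 canvas and paint a simple source motif (one bright column).
n = 8
src = [[255 if y == n // 2 else 0 for y in range(n)] for x in range(n)]

# Step 3: transform to the frequency domain; the magnitude spectrum becomes the "FFT image".
spec = dft2(src)
mag = [[abs(c) for c in row] for row in spec]

# Steps 4-5: a mask template would zero out selected frequencies before the resulting
# pattern is tiled across the textile repeat (omitted in this sketch).
```

The DC term `mag[0][0]` equals the sum of all pixel intensities, and the single-column motif concentrates all energy in the first spectrum row, which is the kind of structured pattern the paper turns into textile designs.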

Findings

The results showed that the FFT images can be used by textile designers directly. The FFT images can also serve as elements for textile image design, such as clothing. The automatically generated FFT images reflect a modern sense of beauty different from that of traditional geometric images.

Research limitations/implications

There are many parameters that affect the artistic effect of the FFT images generated by the FFT algorithm; however, the relationship between these parameters and the artistic effect is not discussed. Three-dimensional effects are not obvious in the simulation results produced by virtual clothing software.

Originality/value

The paper presents a fundamental understanding of the properties of FFT images generated by the FFT algorithm and of how the resulting image patterns can be applied in clothing.

Details

International Journal of Clothing Science and Technology, vol. 24 no. 5
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 3 June 2021

Hao Wu, Quanquan Lv, Jiankang Yang, Xiaodong Yan and Xiangrong Xu

Abstract

Purpose

This paper aims to propose a deep learning model that can be used to expand the number of samples. In the process of manufacturing and assembling electronic components on the printed circuit board in the surface mount technology production line, it is relatively easy to collect non-defective samples, but it is difficult to collect defective samples within a certain period of time. Therefore, the number of non-defective components is much greater than the number of defective components. In the process of training the defect detection method of electronic components based on deep learning, a large number of defective and non-defective samples need to be input at the same time.

Design/methodology/approach

To obtain enough electronic component samples for training, a method based on the generative adversarial network (GAN) is proposed to generate training samples; the generated samples and real samples are then used together to train a convolutional neural network (CNN) to obtain the best detection results.

Findings

The experimental results show that the defect recognition method using GAN and CNN can not only expand the sample images of the electronic components required for the training model but also accurately classify the defect types.

Originality/value

To solve the problem of unbalanced sample types in component inspection, a GAN-based method is proposed to generate different types of training component samples, and then the generated and real samples are used together to train the CNN to obtain the best detection results.

Article
Publication date: 15 January 2020

Hay Wong

Abstract

Purpose

Electron beam additive manufacturing (EBAM) is a popular additive manufacturing (AM) technique used by many industrial sectors. In EBAM process monitoring, data analysis focuses on information extraction directly from the raw data collected in-process, i.e. thermal/optical/electronic images, and on the comparison between the collected data and the computed tomography/microscopy images generated after the EBAM process. This paper aims to postulate that a stack of bitmaps could be generated from the computer-aided design (CAD) model at a range of Z heights and over a user-defined region of interest during file preparation for the EBAM process, and serve as a reference image set.

Design/methodology/approach

Comparison between this reference image set and the workpiece images collected during the EBAM process could then be used for quality assessment purposes. In spite of the extensive literature on CAD slicing and contour generation for AM process preparation, the method of bitmap generation from the CAD model at different fields of view (FOVs) has not been disseminated in detail. This article presents custom CAD-bitmap generation software and an experiment demonstrating the application of the software alongside an electronic imaging system prototype.
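One way to turn a CAD cross-section contour at a given Z height into a binary bitmap over a user-defined FOV is point-in-polygon rasterization. The sketch below is an illustrative assumption, not the authors' software; the contour, FOV and resolution are invented.

```python
def point_in_polygon(px, py, poly):
    """Even-odd rule: is point (px, py) inside the closed polygon `poly`?"""
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):  # edge straddles the horizontal ray through py
            xcross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < xcross:
                inside = not inside
    return inside

def rasterize(poly, width, height, fov):
    """Binary bitmap of a cross-section contour over a user-defined FOV.

    fov = (xmin, ymin, xmax, ymax); each pixel is sampled at its centre.
    """
    xmin, ymin, xmax, ymax = fov
    bmp = []
    for r in range(height):
        py = ymin + (r + 0.5) * (ymax - ymin) / height
        row = [1 if point_in_polygon(xmin + (c + 0.5) * (xmax - xmin) / width, py, poly) else 0
               for c in range(width)]
        bmp.append(row)
    return bmp

# A square contour (e.g. a hypothetical CAD slice at some Z height) rasterized to 8x8.
square = [(2.0, 2.0), (6.0, 2.0), (6.0, 6.0), (2.0, 6.0)]
bitmap = rasterize(square, 8, 8, (0.0, 0.0, 8.0, 8.0))
```

Regenerating such bitmaps at each Z height and FOV would give the pixel-aligned reference images against which the in-process workpiece images can be compared pixel to pixel.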

Findings

Results show that the software is capable of generating binary bitmaps with user-defined Z heights, image dimensions and image FOVs from the CAD model; and can generate reference bitmaps to work with workpiece electronic images for potential pixel-to-pixel image comparison.

Originality/value

It is envisaged that this CAD-bitmap image generation ability opens up new opportunities in quality assessment for the in-process monitoring of the EBAM process.

Article
Publication date: 21 September 2020

Kwonsang Sohn, Christine Eunyoung Sung, Gukwon Koo and Ohbyung Kwon

Abstract

Purpose

This study examines consumers' evaluations of product consumption values, purchase intentions and willingness to pay for fashion products designed using generative adversarial network (GAN), an artificial intelligence technology. This research investigates differences between consumers' evaluations of a GAN-generated product and a non-GAN-generated product and tests whether disclosing the use of GAN technology affects consumers' evaluations.

Design/methodology/approach

Sample products were developed as experimental stimuli using cycleGAN. Data were collected from 163 members of Generation Y. Participants were assigned to one of the three experimental conditions (i.e. non-GAN-generated images, GAN-generated images with disclosure and GAN-generated images without disclosure). Regression analysis and ANOVA were used to test the hypotheses.
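The one-way ANOVA across the three experimental conditions reduces to comparing between-group and within-group variance. Here is a minimal sketch of the F statistic; the willingness-to-pay scores below are invented for illustration, not the study's data.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across k independent groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: how far each group mean sits from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: scatter of observations around their own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical scores for the three conditions: non-GAN, GAN with disclosure,
# GAN without disclosure (illustrative values only).
non_gan = [4.1, 3.8, 4.0, 3.9]
gan_disclosed = [4.3, 4.5, 4.2, 4.4]
gan_undisclosed = [4.8, 5.0, 4.9, 4.7]
f_stat = one_way_anova_f([non_gan, gan_disclosed, gan_undisclosed])
```

A large F relative to the F(k-1, n-k) critical value indicates the condition means differ, which is the pattern the study reports for the undisclosed-GAN condition.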

Findings

Functional, social and epistemic consumption values positively affect willingness to pay in the GAN-generated products. Relative to non-GAN-generated products, willingness to pay is significantly higher for GAN-generated products. Moreover, evaluations of functional value, emotional value and willingness to pay are highest when GAN technology is used, but not disclosed.

Originality/value

This study evaluates the utility of GANs from consumers' perspective based on the perceived value of GAN-generated product designs. Findings have practical implications for firms that are considering using GANs to develop products for the retail fashion market.

Details

International Journal of Retail & Distribution Management, vol. 49 no. 1
Type: Research Article
ISSN: 0959-0552
