Search results
Beatriz Guzmán-Pérez, Javier Mendoza-Jiménez and María Victoria Pérez-Monteverde
Abstract
Purpose
This study aims to demonstrate the derivation of social sustainability metrics that guide the decision-making of hotel managers regarding sustainability strategies based on the case study of Hotel Tigaiga in the Canary Islands, using a noninstrumental approach of the stakeholder theory.
Design/methodology/approach
The analytic–synthetic method of integrated social value (ISV) was used. Data were collected through semi-structured interviews with the stakeholders’ representatives, direct observations and relevant documents.
Findings
Metrics referring to hotel outputs valued by stakeholders and expressed in monetary terms were obtained.
Research limitations/implications
The findings cannot be directly transferred to other, similar hotels. Applying the ISV model to a set of similar hotels is necessary to standardize outputs and proxies.
Practical implications
The results can guide efforts to increase the effectiveness and efficiency of Hotel Tigaiga’s social sustainability strategies.
Originality/value
Research on measuring the sustainability of hotels in terms of generating maximum value for society is limited. This study is unique because it demonstrates the process of deriving comprehensible indicators to guide hotel managers toward social sustainability.
Janani Balakumar and S. Vijayarani Mohan
Abstract
Purpose
Owing to the huge volume of documents available on the internet, text classification becomes a necessary task to handle these documents. To achieve optimal text classification results, feature selection, an important stage, is used to curtail the dimensionality of text documents by choosing suitable features. The main purpose of this research work is to classify the personal computer documents based on their content.
Design/methodology/approach
This paper proposes a new feature selection algorithm based on the artificial bee colony (ABCFS) to enhance text classification accuracy. The proposed algorithm (ABCFS) is evaluated on real and benchmark data sets and compared against existing feature selection approaches such as information gain and the χ2 statistic. To justify the efficiency of the proposed algorithm, the support vector machine (SVM) and an improved SVM classifier are used in this paper.
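The abstract does not detail ABCFS itself, but the general ABC-style wrapper search it builds on can be sketched. The following is an illustrative stdlib-only sketch, not the authors' ABCFS: candidate feature subsets play the role of food sources, employed bees perturb them, and scout bees abandon stagnant ones. The fitness function and the toy document vectors are invented for the example.

```python
import random

random.seed(0)

# Hypothetical binary term vectors (6 features) with class labels;
# features 0 and 3 happen to be class-informative.
DOCS = [([1,0,0,1,0,0], 1), ([1,1,0,1,0,0], 1),
        ([0,0,1,0,1,1], 0), ([0,1,1,0,0,1], 0)]

def fitness(subset):
    # Crude separability proxy: fraction of cross-class document pairs
    # whose vectors differ on at least one selected feature.
    ok = total = 0
    for i, (xi, yi) in enumerate(DOCS):
        for xj, yj in DOCS[i + 1:]:
            if yi != yj:
                total += 1
                if any(xi[f] != xj[f] for f in subset):
                    ok += 1
    return ok / total if total else 0.0

def neighbor(subset, n_features):
    # Employed-bee step: flip one randomly chosen feature in or out.
    s = set(subset)
    f = random.randrange(n_features)
    s.symmetric_difference_update({f})
    return sorted(s) or subset

def abc_select(n_features=6, n_bees=5, iters=30, limit=5):
    foods = [sorted(random.sample(range(n_features), 2)) for _ in range(n_bees)]
    trials = [0] * n_bees
    for _ in range(iters):
        for b in range(n_bees):
            cand = neighbor(foods[b], n_features)
            if fitness(cand) >= fitness(foods[b]):
                foods[b], trials[b] = cand, 0
            else:
                trials[b] += 1
            if trials[b] > limit:  # scout bee: abandon and reinitialize
                foods[b] = sorted(random.sample(range(n_features), 2))
                trials[b] = 0
    return max(foods, key=fitness)

best = abc_select()
print(best, fitness(best))
```

The selected subset would then feed an SVM classifier; in the paper the fitness is driven by classification accuracy rather than this toy separability score.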
Findings
The experiment was conducted on real and benchmark data sets. The real data set was collected in the form of documents that were stored in the personal computer, and the benchmark data set was collected from Reuters and 20 Newsgroups corpus. The results prove the performance of the proposed feature selection algorithm by enhancing the text document classification accuracy.
Originality/value
This paper proposes a new ABCFS algorithm for feature selection, evaluates its efficiency and improves the support vector machine. Here, the ABCFS algorithm is used to select features from (unstructured) text documents, whereas in existing work ABC-based feature selection has been applied only to (structured) data features, not to text. The proposed algorithm classifies documents automatically based on their content.
Sreelakshmi D. and Syed Inthiyaz
Abstract
Purpose
Pervasive health-care computing applications in the medical field provide better diagnosis of various organs such as the brain, spinal cord, heart and lungs. The purpose of this study is to diagnose brain tumors using machine learning (ML) and deep learning (DL) techniques. Brain diagnosis is an important task in medical research and the most prominent step in providing treatment to patients; it is therefore important to achieve a high diagnosis accuracy so that patients readily receive treatment from medical consultants. There are many earlier investigations on diagnosing brain diseases. Moreover, it is necessary to improve the performance measures using DL and ML approaches.
Design/methodology/approach
In this paper, various brain-disorder diagnosis applications are compared through the following implemented techniques, which segment and classify brain magnetic resonance imaging or computerized tomography images. The adaptive median filter, convolutional neural network, gradient boosting machine learning (GBML) and improved support vector machine are the advanced methods used to extract hidden features and provide medical information for diagnosis. The proposed design is implemented in Python 3.7.8 for simulation analysis.
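Of the preprocessing steps named above, the adaptive median filter is simple enough to sketch. The following is an illustrative stdlib-only version, not the authors' implementation: the window grows until its median is not itself an impulse (strictly between the window's min and max), then the pixel is replaced only if it is an impulse. The fallback behavior at the maximum window size varies between formulations; here the last median is used.

```python
def adaptive_median(img, max_win=5):
    """Adaptive median filter over a 2D list of pixel values (sketch)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            win = 3
            while win <= max_win:
                r = win // 2
                vals = sorted(
                    img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))
                )
                lo, med, hi = vals[0], vals[len(vals) // 2], vals[-1]
                if lo < med < hi:              # median is not an impulse
                    if not (lo < img[y][x] < hi):
                        out[y][x] = med        # pixel is impulse noise: replace
                    break
                win += 2                       # enlarge the window and retry
            else:
                out[y][x] = med                # fall back to the last median
    return out

# A flat region corrupted by one salt-noise pixel (value 255).
noisy = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
print(adaptive_median(noisy))
```

On the toy input the salt pixel is replaced by the local median while the flat background is preserved.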
Findings
This research offers help to investigators, diagnosis centers and doctors. For each model, performance measures are taken to estimate the application's performance. Measures such as accuracy, sensitivity, recall, F1 score, peak signal-to-noise ratio and correlation coefficient have been estimated using the proposed methodology; these metrics show a marked improvement over earlier models.
Originality/value
The implemented DL and ML designs outperform the earlier methodologies and achieve good application success scores.
Maedeh Gholamazad, Jafar Pourmahmoud, Alireza Atashi, Mehdi Farhoudi and Reza Deljavan Anvari
Abstract
Purpose
A stroke is a serious, life-threatening condition that occurs when the blood supply to a part of the brain is cut off. The earlier a stroke is treated, the less damage is likely to occur. One of the methods that can lead to faster treatment is timely and accurate prediction and diagnosis. This paper aims to compare the binary integer programming-data envelopment analysis (BIP-DEA) model and the logistic regression (LR) model for diagnosing and predicting the occurrence of stroke in Iran.
Design/methodology/approach
In this study, two algorithms of the BIP-DEA and LR methods were introduced and key risk factors leading to stroke were extracted.
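The LR side of the comparison can be sketched with a minimal logistic regression fitted by stochastic gradient descent. This is an illustrative stdlib-only sketch; the toy "risk factor" data, learning rate and epoch count are invented, and the study's actual LR fitting procedure is not specified in the abstract.

```python
import math

def train_lr(X, y, lr=0.5, epochs=500):
    """Fit logistic regression weights by per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            g = p - yi                       # gradient of log-loss wrt z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z >= 0 else 0

# Hypothetical data: two scaled risk factors; class 1 when both are high.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [0, 0, 1, 1]
w, b = train_lr(X, y)
print([predict(w, b, xi) for xi in X])
```

In the study, the fitted coefficients are what identify which risk factors are more effective; the BIP-DEA alternative instead solves a binary integer program over the DEA frontier.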
Findings
The study population consisted of 2,100 samples (patients) divided into six subsamples of different sizes. The classification table of each algorithm showed that the BIP-DEA model had more reliable results than the LR for the small data size. After running each algorithm, the BIP-DEA and LR algorithms identified eight and five factors as more effective risk factors and causes of stroke, respectively. Finally, predictive models using the important risk factors were proposed.
Originality/value
The main objective of this study is to provide the integrated BIP-DEA algorithm as a fast, easy and suitable tool for evaluation and prediction. In fact, the BIP-DEA algorithm can be used as an alternative tool to the LR model when the sample size is small. These algorithms can be used in various fields, including the health-care industry, to predict and prevent various diseases before the patient’s condition becomes more dangerous.
Lokesh Singh, Rekh Ram Janghel and Satya Prakash Sahu
Abstract
Purpose
The study aims to cope with the problems confronted in skin lesion datasets with little training data for the classification of melanoma. The vital, challenging issue is the insufficiency of training data encountered while classifying lesions as melanoma and non-melanoma.
Design/methodology/approach
In this work, a transfer learning (TL) framework, Transfer Constituent Support Vector Machine (TrCSVM), is designed for melanoma classification based on feature-based domain adaptation (FBDA), leveraging the support vector machine (SVM) and Transfer AdaBoost (TrAdaBoost). The working of the framework is twofold: first, SVM is utilized for domain adaptation to learn a more transferable representation between the source and target domains. In the first phase, for homogeneous domain adaptation, it augments features by transforming the data from the source and target (different but related) domains into a shared subspace. In the second phase, for heterogeneous domain adaptation, it leverages knowledge by augmenting features from the source to the target (different and not related) domain in a shared subspace. Second, TrAdaBoost is utilized to adjust the weights of wrongly classified data in the newly generated source and target datasets.
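The TrAdaBoost reweighting step the framework leverages can be sketched in a few lines. This is an illustrative sketch of the standard TrAdaBoost update, not the authors' TrCSVM code: misclassified source instances are down-weighted by a fixed factor, while misclassified target instances are up-weighted based on the current weighted target error.

```python
import math

def tradaboost_update(w_src, w_tgt, err_src, err_tgt, n_iters, eps_t):
    """One round of TrAdaBoost-style reweighting.

    err_src / err_tgt: 1 if the instance was misclassified, else 0.
    eps_t: weighted error on the target domain (assumed < 0.5).
    """
    n = len(w_src)
    # Fixed down-weighting factor for source instances.
    beta_src = 1.0 / (1.0 + math.sqrt(2.0 * math.log(n) / n_iters))
    # Adaptive factor for target instances.
    beta_tgt = eps_t / (1.0 - eps_t)
    new_src = [w * beta_src ** e for w, e in zip(w_src, err_src)]       # shrink
    new_tgt = [w * beta_tgt ** (-e) for w, e in zip(w_tgt, err_tgt)]    # grow
    return new_src, new_tgt

# Two source and two target instances; the first of each is misclassified.
src, tgt = tradaboost_update([1.0, 1.0], [1.0, 1.0],
                             [1, 0], [1, 0], n_iters=10, eps_t=0.25)
print(src, tgt)
```

Over the boosting rounds, unhelpful source instances thus fade out while hard target instances gain influence, which is what lets the framework cope with small target datasets.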
Findings
The experimental results empirically demonstrate the superiority of TrCSVM over state-of-the-art TL methods on small datasets, with an accuracy of 98.82%.
Originality/value
Experiments are conducted on six skin lesion datasets, and performance is compared based on accuracy, precision, sensitivity and specificity. The effectiveness of TrCSVM is evaluated on ten other datasets toward testing its generalizing behavior. Its performance is also compared with two existing TL frameworks (TrResampling, TrAdaBoost) for the classification of melanoma.