Bing Long, Zhengji Song and Xingwei Jiang
Abstract
Purpose
To improve the speed and precision of online monitoring and diagnosis for satellites using satellite telemetry data.
Design/methodology/approach
In the monitoring system, a fuzzy range that gives the probability of alarm for telemetry channels using fuzzy reasoning is outlined. A failure confidence factor is introduced to modify the traditional real-time diagnosis algorithm based on the multisignal model, describing the relative failure possibility of suspected components. The modified real-time diagnosis algorithm rapidly assigns each component of the system a state such as good, bad, suspected or unknown. The failure probability for suspected components is then obtained by the Mamdani fuzzy reasoning algorithm.
Findings
The experimental results reveal that the diagnosis system improves not only diagnostic speed but also diagnostic precision, by giving a failure probability for suspected components that may be potential failure components.
Research limitations/implications
The method requires a clear fault dependency relationship between components and tests.
Practical implications
A very useful method for researchers and engineers engaged in satellite online monitoring and diagnosis.
Originality/value
This paper presents a new method combining the multisignal model and fuzzy theory to give the failure probability for suspected components, which improves the speed and precision of fault diagnosis.
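The fuzzy alarm range for a telemetry channel can be illustrated with a minimal sketch. The abstract does not give the paper's actual membership functions, so the trapezoidal ramp, the channel limits and the margin below are hypothetical placeholders:

```python
def alarm_degree(value, low, high, margin):
    """Fuzzy alarm degree for one telemetry channel.

    Returns 0.0 inside the nominal range [low, high], 1.0 once the
    reading is more than `margin` outside it, and a linear ramp in
    between: a simple trapezoidal membership standing in for the
    paper's unspecified membership functions.
    """
    if low <= value <= high:
        return 0.0
    # distance outside the nominal range
    d = (low - value) if value < low else (value - high)
    return min(1.0, d / margin)
```

A reading just outside the nominal range yields a partial alarm probability rather than the hard on/off alarm of a traditional threshold monitor.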
Ibrahim Al Rashdi, Sara Al Balushi, Alia Al Shuaili, Said Al Rashdi, Nadiya Ibrahim Al Bulushi, Asiya Ibrahim Al Kindi, Qasem Al Salmi, Hilal Al Sabti, Nada Korra, Sherif Abaza, Ahmad Nader Fasseeh and Zoltán Kaló
Abstract
Purpose
Health technologies are advancing rapidly and becoming more expensive, posing a challenge for financing healthcare systems. Health technology assessment (HTA) improves the efficiency of resource allocation by facilitating evidence-informed decisions on the value of health technologies. Our study aims to create a customized HTA roadmap for Oman based on a gap analysis between the current and future status of HTA implementation.
Design/methodology/approach
We surveyed participants of an advanced HTA training program to assess the current state of HTA implementation in Oman and explore long-term goals. A list of draft recommendations was developed in areas with room for improvement. The list was then validated for its feasibility in a round table discussion with senior health policy experts to conclude on specific actions for HTA implementation.
Findings
Survey results aligned well with expert discussions. The round table discussion concluded with a phased action plan for HTA implementation. In the short term (1–2 years), efforts will focus on building capacity through training programs. In the medium term (3–5 years), plans include expanding the HTA unit and introducing multiple cost-effectiveness thresholds, while in the long term (6–10 years), publication of HTA recommendations, critical appraisal reports and timelines is recommended.
Originality/value
Although the HTA system in Oman is still in its early stages, strong initiatives are being taken for its advancement. This structured approach ensures a comprehensive integration of HTA into the healthcare system, enhancing decision-making and promoting a sustainable, evidence-based system addressing the population’s needs.
Herbert Zuze and Melius Weideman
Abstract
Purpose
The purpose of this research project was to determine how the three biggest search engines interpret keyword stuffing as a negative design element.
Design/methodology/approach
This research was based on triangulation between scholarly reporting, search engine claims, SEO practitioners and empirical evidence on the interpretation of keyword stuffing. Five websites with varying keyword densities were designed and submitted to Google, Yahoo! and Bing. The experiment was carried out in two phases, and the responses of the search engines were recorded.
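The keyword density varied across the five test websites can be computed with a short sketch. The study does not specify its tokenisation rule, so the word-splitting regex below is an assumption:

```python
import re

def keyword_density(body_text, keyword):
    """Percentage of words in the page body that match the keyword
    (case-insensitive); a 97.3 per cent density means nearly every
    word on the page is the keyword."""
    words = re.findall(r"[a-z']+", body_text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return 100.0 * hits / len(words)
```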
Findings
Scholars hold differing views on spamdexing, characterised by different keyword density measurements in the body text of a webpage. During both phases, almost all of the test webpages, including the one with a 97.3 per cent keyword density, were indexed.
Research limitations/implications
Only the three biggest search engines were considered, and monitoring was done for a set time only. The claims that high keyword densities will lead to blacklisting have been refuted.
Originality/value
Websites should be designed with high quality, well‐written content. Even though keyword stuffing is unlikely to lead to search engine penalties, it could deter human visitors and reduce website value.
Abstract
Purpose
The purpose of this paper is to analyze the readability and level of word complexity of search engine results pages (SERPs) snippets and associated web pages between Google and Bing.
Design/methodology/approach
The authors employed the Readability Test Tool to analyze the readability and word complexity of 3,000 SERPs snippets and 3,000 associated pages in Google and Bing retrieved on 150 search queries issued by middle school children.
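The Readability Test Tool reports several standard indices; as an illustration, one of them, the Flesch-Kincaid grade level, can be sketched as below. The vowel-run syllable counter is a crude assumption, not the tool's actual algorithm:

```python
import re

def count_syllables(word):
    # crude heuristic: each run of vowels approximates one syllable
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """US grade level: 0.39 * (words/sentences)
    + 11.8 * (syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)
```

A snippet scoring above grade 8 on such an index would mismatch the reading comprehension of the middle school children who issued the queries.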
Findings
A significant difference was found in the readability of SERPs snippets and associated web pages between Google and Bing. A significant difference was also observed in the number of complex words in snippets between the two engines, but not in associated web pages. At the engine level, the readability of both Google and Bing snippets was significantly higher than that of the associated web pages. The readability of Google SERPs snippets was at a much higher level than that of Bing. The readability of snippets in both engines did not match the reading comprehension level of children in grades 6–8.
Research limitations/implications
The data corpus may be small. Analysis relied on quantitative measures.
Practical implications
Practitioners and other mediators should mitigate the readability issue in SERPs snippets. Researchers should consider text readability and word complexity simultaneously with other factors to obtain a nuanced understanding of young users’ web information behaviors. Additional theoretical and methodological implications are discussed.
Originality/value
This study measured the readability and the level of word complexity embedded in SERPs snippets and compared them to respective web pages in Google and Bing. Findings provide further evidence of the readability issue of SERPs snippets and the need to solve this issue through system design improvements.
Sumeer Gul, Sabha Ali and Aabid Hussain
Abstract
Purpose
The purpose of this study is to assess the retrieval performance of three search engines, i.e. Google, Yahoo and Bing for navigational queries using two important retrieval measures, i.e. precision and relative recall in the field of life science and biomedicine.
Design/methodology/approach
The top three search engines, namely Google, Yahoo and Bing, were selected on the basis of their ranking in Alexa, an analytical tool that provides rankings of global websites. Furthermore, the scope of the study was confined to search engines with an English interface. Clarivate Analytics' Web of Science was used for the extraction of navigational queries in the field of life science and biomedicine. Navigational queries (classified as one-word, two-word and three-word queries) were extracted from the keywords of the papers of the top 100 contributing authors in the selected field. Keywords were also checked for duplication. Two important evaluation parameters, i.e. precision and relative recall, were used to calculate the performance of the search engines on the navigational queries.
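The two evaluation parameters can be sketched as follows. Note that the abstract reports mean precision values above 1, which suggests a graded relevance scale rather than the set-based definitions below; the function names and the pooled-union definition of relative recall are illustrative assumptions:

```python
def precision(relevant_retrieved, total_retrieved):
    """Fraction of retrieved results judged relevant for a query."""
    return relevant_retrieved / total_retrieved if total_retrieved else 0.0

def relative_recall(relevant_by_engine, engine):
    """Relevant results of one engine divided by the pooled (union of)
    relevant results retrieved by all engines for the same query."""
    pooled = set().union(*relevant_by_engine.values())
    if not pooled:
        return 0.0
    return len(relevant_by_engine[engine]) / len(pooled)
```

Relative recall is used because absolute recall is unknowable on the live web: the pool of all engines' relevant results stands in for the full relevant set.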
Findings
The mean precision for Google was highest (2.30), followed by Yahoo (2.29) and Bing (1.68), while the mean relative recall was also highest for Google (0.36), followed by Yahoo (0.33) and Bing (0.31).
Research limitations/implications
The study is of great help to researchers and academia in determining the retrieval efficiency of Google, Yahoo and Bing in terms of navigational query execution in the field of life science and biomedicine. It can help users focus on the search process, query structuring and query execution across the selected search engines to achieve the desired result list in a professional search environment. The study can also act as a ready reference source for exploring navigational queries and how they can be managed in the context of the information retrieval process. It also showcases the retrieval efficiency of the search engines on the basis of subject diversity (life science and biomedicine), highlighting it in terms of query intention.
Originality/value
Though many studies have been conducted on the retrieval efficiency of search engines, the current work is the first of its kind to study the retrieval effectiveness of Google, Yahoo and Bing on navigational queries in the field of life science and biomedicine. The study will help in understanding the methods and approaches to be adopted by users for navigational query execution in a professional search environment, i.e. “life science and biomedicine”.
Ahmet Uyar and Farouk Musa Aliyu
Abstract
Purpose
The purpose of this paper is to better understand three main aspects of the semantic web search engines Google Knowledge Graph and Bing Satori. The authors investigated: coverage of entity types, the extent of their support for list search services and the capabilities of their natural language query interfaces.
Design/methodology/approach
The authors manually submitted selected queries to these two semantic web search engines and evaluated the returned results. To test the coverage of entity types, the authors selected entity types from the Freebase database. To test the capabilities of the natural language query interfaces, the authors used a manually developed query data set about US geography.
Findings
The results indicate that both semantic search engines cover only the most common entity types. In addition, the list search service is provided for only a small percentage of entity types. Moreover, both search engines support queries with very limited complexity and with a limited set of recognised terms.
Research limitations/implications
Both companies are continually working to improve their semantic web search engines. Therefore, the findings show their capabilities at the time of conducting this research.
Practical implications
The results show that in the near future the authors can expect both semantic search engines to expand their entity databases and improve their natural language interfaces.
Originality/value
As far as the authors know, this is the first study evaluating any aspect of newly developing semantic web search engines. It shows the current capabilities and limitations of these semantic web search engines. It provides directions to researchers by pointing out the main problems for semantic web search engines.
Cristina I. Font-Julian, José-Antonio Ontalba-Ruipérez and Enrique Orduña-Malea
Abstract
Purpose
The purpose of this paper is to determine the effect of the chosen search engine results page (SERP) on the website-specific hit count estimation indicator.
Design/methodology/approach
A sample of 100 Spanish rare disease association websites is analysed, obtaining the website-specific hit count estimation for the first and last SERPs in two search engines (Google and Bing) at two different periods in time (2016 and 2017).
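A minimal sketch of the indicator comparison, assuming the hit count estimates have already been scraped from the first and last SERPs for a site-specific query (the function name and normalisation are hypothetical):

```python
def hce_divergence(first_serp_hits, last_serp_hits):
    """Relative gap between the hit count estimate reported on the
    first SERP and the (usually smaller) count on the last SERP.
    A value near 0 means the two estimates agree."""
    if first_serp_hits == 0:
        return 0.0
    return abs(first_serp_hits - last_serp_hits) / first_serp_hits
```

Computing this per website and per year makes it easy to see which SERP's estimate is stable enough for longitudinal webometric comparison.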
Findings
It has been empirically demonstrated that there are differences between the number of hits returned on the first and last SERP in both Google and Bing. These differences are significant when they exceed a threshold value on the first SERP.
Research limitations/implications
Future studies considering other samples, more SERPs and generating different queries other than website page count (<site>) would be desirable to draw more general conclusions on the nature of quantitative data provided by general search engines.
Practical implications
Selecting a wrong SERP to calculate some metrics (in this case, website-specific hit count estimation) might provide misleading results, comparisons and performance rankings. The empirical data suggest that the first SERP captures the differences between websites better because it has a greater discriminating power and is more appropriate for webometric longitudinal studies.
Social implications
The findings allow improving future quantitative webometric analyses based on website-specific hit count estimation metrics in general search engines.
Originality/value
The website-specific hit count estimation variability between SERPs has been empirically analysed, considering two different search engines (Google and Bing), a set of 100 websites focussed on a similar market (Spanish rare diseases associations), and two annual samples, making this study the most exhaustive on this issue to date.
Abstract
Purpose
The main purpose of this research is to determine whether the performance of natural language (NL) search engines in retrieving exact answers to NL queries differs from that of keyword-searching search engines.
Design/methodology/approach
A total of 40 natural language queries were posed to Google and three NL search engines: Ask.com, Hakia and Bing. The first results pages were compared in terms of whether exact answer documents were retrieved and placed at the top of the results, and in terms of the precision of exact answer and relevant documents.
Findings
Ask.com retrieved exact answer document descriptions at the top of the results list in 60 per cent of searches, better than the other search engines, but the mean number of exact answer top-of-list documents for the three NL search engines (20.67) was slightly less than Google's (21). There was no significant difference between the precision of Google and the three NL search engines in retrieving exact answer documents for NL queries.
Practical implications
The results imply that all of the NL and keyword-searching search engines studied mostly employ similar techniques based on the keywords of the NL queries, which is far from semantic searching and from understanding what the user wants when searching with NL queries.
Originality/value
The results shed light on the claims of NL search engines regarding semantic searching of NL queries.
Xiuqin Wang, Lanmin Shi, Bing Wang and Mengying Kan
Abstract
Purpose
The purpose of this paper is to provide a method that can better evaluate the credit risk (CR) under PPP project finance.
Design/methodology/approach
The principle for evaluating the CR of PPP projects is to calculate three critical indicators: the default probability (DP), the recovery rate (RR) and the exposure at default (EAD). The RR is determined by qualitative analysis according to Standard & Poor’s Recovery Scale, and the EAD is estimated by NPV analysis. The estimation of the DP is the focus of CR assessment because the future cash flow is uncertain and there are no trading records or market data that can be used to evaluate the credit condition of PPP projects before financial close. The modified CreditMetrics model and Monte Carlo simulation are applied to evaluate the DP, and the application is illustrated with a PPP project finance case.
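A highly simplified Monte Carlo sketch of the DP estimation step, assuming normally distributed annual cash flows and default whenever cash flow falls below debt service. All parameters are hypothetical, and the paper's modified CreditMetrics model is considerably richer:

```python
import random

def estimate_default_probability(expected_cashflow, volatility,
                                 debt_service, years,
                                 n_sims=10_000, seed=42):
    """Monte Carlo sketch of the default probability (DP).

    A simulated project defaults if its annual cash flow (drawn from
    a normal distribution) falls below the annual debt service in any
    year of the loan term.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    defaults = 0
    for _ in range(n_sims):
        for _ in range(years):
            cashflow = rng.gauss(expected_cashflow, volatility)
            if cashflow < debt_service:
                defaults += 1
                break  # count at most one default per simulated project
    return defaults / n_sims
```

Running the simulation with a higher cash flow volatility yields a higher DP, matching the finding that the bank's potential loss interval widens with project uncertainty.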
Findings
First, the proposed method can evaluate the influence of the project’s cash flow uncertainty on the potential loss of the bank. Second, instead of outputting a single default loss value, the method derives an interval of the potential loss for the bank. Third, the method can effectively analyze how different repayment schedules and the risk preferences of banks influence the evaluation result.
Originality/value
The proposed method offers an approach for the bank to value the CR under PPP project finance. The method takes into consideration the uncertainty and other characteristics of PPP project finance, adopts and improves the CreditMetrics model, and provides a possible loss range under different project cash flow volatilities through interval estimation at a given confidence level. In addition, the bank’s risk preference is considered in the CR evaluation method proposed in this study, the first time it has been investigated in the CR evaluation process of PPP project finance.