Search results

1 – 10 of over 3000
Article
Publication date: 2 October 2020

Cheng Chen and Honghua Wang

Abstract

Purpose

Motivated by a previous study, which proposed fitting a regression line to test the distribution of a gear gravimetric wear-loss sequence, this paper first considers using a regression line to fit the gear gravimetric wear-loss sequence under a stationary-random-process assumption. Because that stationarity assumption was not proved in the previous work, and its predictions were not highly precise, this paper proposes a method for fitting the probability distribution function of a non-stationary random process.

Design/methodology/approach

First, this paper proposes fitting the zero-step approximate probability density with a weighted sum of Gaussian terms. Second, at the start the method uses few Gaussian terms at low precision; as the number of points increases, more Gaussian terms are used at higher precision, and some Gaussian terms and some earlier points are deleted subject to the precision condition. Third, for the constrained particle swarm optimization problem, this paper proposes an improved method whose stopping criterion is also the precision condition.
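The weighted-sum-of-Gaussians idea can be illustrated with a minimal sketch. Everything below is a simplification for illustration, not the paper's procedure: the Gaussian centres and widths are held fixed and only the weights are updated by a crude EM-style iteration, whereas the paper adapts the number of terms and optimises them with an improved particle swarm method; all names (`gauss`, `mixture_pdf`, `fit_weights`) are hypothetical.

```python
import math
import random

def gauss(x, mu, sigma):
    # Single Gaussian probability density.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, weights, mus, sigmas):
    # Weighted sum of Gaussian terms; weights are assumed to sum to 1.
    return sum(w * gauss(x, m, s) for w, m, s in zip(weights, mus, sigmas))

def fit_weights(samples, mus, sigmas, iters=100):
    # Crude EM-style update of the mixture weights with the Gaussian
    # centres and widths held fixed (a simplification; the paper instead
    # adapts the terms with particle swarm optimisation).
    k = len(mus)
    weights = [1.0 / k] * k
    for _ in range(iters):
        counts = [0.0] * k
        for x in samples:
            resp = [w * gauss(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
            total = sum(resp) or 1e-300
            for j in range(k):
                counts[j] += resp[j] / total
        weights = [c / len(samples) for c in counts]
    return weights
```

For data drawn 30/70 from two well-separated Gaussians, the fitted weights approach 0.3 and 0.7, illustrating how the weighted sum tracks the empirical density.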

Findings

In the experimental data analysis section, gear wear loss is predicted with the proposed method. Compared, by prediction relative error, with the method based on the stationary-random-sequence assumption, the proposed method reduces the relative errors whose absolute values exceed 5% (except when the current point sequence number is 2), and keeps the relative errors whose absolute values are already below 5% below 5%.

Originality/value

Finally, the proposed method, based on the non-stationary random sequence assumption, is shown to be the better method for gear gravimetric wear loss prediction.

Article
Publication date: 4 June 2020

Ravindu Kahandawa, Niluka Domingo, Gregory Chawynski and S.R. Uma

Abstract

Purpose

Reconstruction processes after an earthquake require estimating repair costs to decide on whether to repair or rebuild. This requires an accurate post-earthquake cost estimation tool. Currently, there are no post-earthquake loss estimation models to estimate repair costs accurately. There are loss assessment tools available, namely, HAZUS, performance assessment calculation tool (PACT), seismic performance and loss assessment tool (SLAT) and seismic performance prediction tool, which have not been specifically used for post-earthquake repair cost estimation. This paper aims to focus on identifying factors that need to be considered when upgrading these tools for post-earthquake repair cost estimation.

Design/methodology/approach

The research was conducted as an exploratory study using a literature review, document analysis of the PACT, SLAT and HAZUS software and 18 semi-structured interviews.

Findings

The research identified information sources available for estimation and factors to be considered when developing estimations based on the information sources.

Research limitations/implications

The data were collected from professionals involved mostly in housing repair work in New Zealand. Therefore, the impact of these repair-work factors might differ for other forms of structures, such as civil structures including bridges, and for other countries, as a result of varying construction details and standards.

Practical implications

The identified factors will be used to improve loss estimation tools such as PACT and HAZUS, as well as to develop a post-earthquake repair cost estimation tool.

Originality/value

Currently, the identified factors impacting post-earthquake damage repair cost estimations are not considered in loss estimation tools. Factors identified in this research will help to develop a more accurate cost estimation tool for post-earthquake repair work.

Details

International Journal of Disaster Resilience in the Built Environment, vol. 12 no. 1
Type: Research Article
ISSN: 1759-5908

Book part
Publication date: 24 May 2007

Frederic Carluer

Abstract

“It should also be noted that the objective of convergence and equal distribution, including across under-performing areas, can hinder efforts to generate growth. Contrariwise, the objective of competitiveness can exacerbate regional and social inequalities, by targeting efforts on zones of excellence where projects achieve greater returns (dynamic major cities, higher levels of general education, the most advanced projects, infrastructures with the heaviest traffic, and so on). If cohesion policy and the Lisbon Strategy come into conflict, it must be borne in mind that the former, for the moment, is founded on a rather more solid legal foundation than the latter” (European Commission, 2005, p. 9, Adaptation of Cohesion Policy to the Enlarged Europe and the Lisbon and Gothenburg Objectives).

Details

Managing Conflict in Economic Convergence of Regions in Greater Europe
Type: Book
ISBN: 978-1-84950-451-5

Article
Publication date: 1 August 2005

Adnan Enshassi, Sherif Mohamed and Ibrahim Madi

Abstract

Estimating is a fundamental part of the construction industry. The success or failure of a project depends on the accuracy of several estimates made throughout the course of the project. Construction estimating is the compilation and analysis of the many items that influence and contribute to the cost of a project. Estimating, which is done before the physical performance of the work, requires a detailed study and careful analysis of the bidding documents in order to achieve the most accurate estimate possible of the probable cost, consistent with the bidding time available and the accuracy and completeness of the information submitted. Overestimated or underestimated costs have the potential to cause losses to local contracting companies. The objective of this paper is to identify the essential factors, and their relative importance, that affect the accuracy of cost estimation of building contracts in the Gaza Strip. The analysis of fifty-one factors considered in a questionnaire survey concluded that the main factors are: location of the project; segmentation of the Gaza Strip and limitation of movement between areas; the political situation; and the financial status of the owner.

Details

Journal of Financial Management of Property and Construction, vol. 10 no. 2
Type: Research Article
ISSN: 1366-4387

Article
Publication date: 1 April 1991

R.T.M. Whipple

Abstract

Examines the underlying basis of the valuation process at a time when world property markets are experiencing the effects of the global recession. Refers to professional criticism in the USA, the UK and Australasia. Advocates a return to first principles in all appraisals and valuations, with value determined by the interaction of supply and demand in any particular market.

Details

Journal of Property Valuation and Investment, vol. 9 no. 4
Type: Research Article
ISSN: 0960-2712

Article
Publication date: 8 June 2020

Vandana Bagde and Dethe C. G

Abstract

Purpose

Multiple input multiple output (MIMO) communication is a recent innovation in wireless communication that has become popular for its faster data transmission speed, and it is being examined and implemented for the latest broadband wireless networks. Although high-capacity wireless channels have been identified, better techniques are still required to achieve higher transmission speed with acceptable reliability. Systems with multiple antennas at the transmitting and receiving sides fall into two classes: diversity techniques and spatial multiplexing methods. Diversity techniques improve the reliability of the transmitted signal; their fundamental idea is to transform a fading wireless channel, such as a Rayleigh fading channel, into a steady additive white Gaussian noise (AWGN)-like channel free of catastrophic signal fading. The maximum transmission speed achievable by spatial multiplexing methods is close to the MIMO channel capacity, whereas for diversity methods the maximum speed is much lower. With the advent of the space–time block coding (STBC) antenna diversity technique, higher-speed data transmission is achievable for spatially multiplexed MIMO (SM-MIMO) systems. At the receiving end, signal detection is a complex task for SM-MIMO systems. Additionally, a link adaptation method is implemented to select an appropriate coding and modulation scheme, such as the STBC space diversity technique, so as to use radio resources efficiently. The proposed work attempts to improve signal detection at the receiving end by employing the STBC diversity technique with linear detection methods: zero forcing (ZF), minimum mean square error (MMSE), ordered successive interference cancellation (OSIC) and maximum likelihood detection (MLD). The performance of MLD is found to be better than that of the other detection techniques.

Design/methodology/approach

Alamouti's STBC uses two transmit antennas regardless of the number of receive antennas. The encoding and decoding operations of STBC are shown in the diagram cited earlier. In the code matrix, the rows represent different time instants and the columns the symbols transmitted from each antenna: the first and second rows correspond to the first and second time instants, respectively. At time t, symbols s1 and s2 are transmitted from antenna 1 and antenna 2, respectively. Assuming each symbol has duration T, at time t + T the symbols –s2* and s1*, where (.)* denotes the complex conjugate, are transmitted from antenna 1 and antenna 2, respectively. Case of one receive antenna: the reception and decoding of the signal depend on the number of receive antennas available. For one receive antenna, the signals are received at antenna 1, where hij is the channel transfer function from the jth transmit antenna to the ith receive antenna, n1 is a complex random variable representing noise at antenna 1, and x(k) denotes x at time instant k (i.e. at time t + (k – 1)T).
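The encoding and combining steps described above can be sketched in a few lines. This is a generic, noiseless illustration of Alamouti's scheme with one receive antenna, assuming the channel stays flat over the two symbol periods; it is not code from the paper, and the names are hypothetical.

```python
def alamouti_encode(s1, s2):
    # Two time slots, two transmit antennas:
    # slot 1 sends (s1, s2); slot 2 sends (-s2*, s1*).
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def alamouti_decode(r1, r2, h1, h2):
    # Combine the two received samples with the channel estimates.
    # r1, r2: received samples in slots 1 and 2 at the single receive antenna;
    # h1, h2: flat channel gains from transmit antennas 1 and 2.
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return s1_hat, s2_hat

# Noiseless round trip with QPSK-like symbols and an arbitrary channel.
h1, h2 = 0.8 + 0.3j, -0.5 + 0.9j
s1, s2 = 1 + 1j, -1 + 1j
(tx1a, tx2a), (tx1b, tx2b) = alamouti_encode(s1, s2)
r1 = h1 * tx1a + h2 * tx2a   # slot 1
r2 = h1 * tx1b + h2 * tx2b   # slot 2
print(alamouti_decode(r1, r2, h1, h2))   # recovers (s1, s2)
```

Expanding the combiner algebraically shows the cross terms cancel, leaving (|h1|² + |h2|²)·s1 and (|h1|² + |h2|²)·s2, which is exactly the diversity gain the abstract describes.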

Findings

The results obtained for maximal ratio combining (MRC) with the 1 × 4 scheme show that the BER curve drops to 10^-4 at a signal-to-noise ratio (SNR) of 10 dB, whereas for the MRC 1 × 2 scheme the BER drops to 10^-5 at an SNR of 20 dB. Results in Table 1 show that when STBC is employed for MRC with the 1 × 2 scheme (one transmit antenna and two receive antennas), the BER comes down to 0.0076 at Eb/N0 of 12. Similarly, with the MRC 1 × 4 antenna scheme, the BER drops to 0 at Eb/N0 of 12. Thus it can be concluded from the obtained graphs that MRC with STBC gives improved results. When the STBC technique is used with the 3 × 4 scheme, at an SNR of 10 dB the BER comes close to 10^-6 (figure 7.3). Comparing the AWGN and Rayleigh fading channels, for the AWGN channel the BER equals 0 at an SNR of 13.5 dB, whereas for the Rayleigh fading channel the BER is close to 10^-3 at Eb/N0 = 15. Simulation results (figure 7.2) show the BER dropping to 0 at an SNR of 12 dB.

Research limitations/implications

Optimal design and successful deployment of high-performance wireless networks present a number of technical challenges. These include regulatory limits on usable radio-frequency spectrum and a complex time-varying propagation environment affected by fading and multipath. The effect of multipath fading in wireless systems can be reduced by using antenna diversity. Previous studies show the performance of transmit diversity with narrowband signals using linear equalization, decision feedback equalization and maximum likelihood sequence estimation (MLSE), and with spread-spectrum signals using a RAKE receiver. The available interference cancellation (IC) techniques compatible with STBC schemes at transmission require multiple antennas at the receiver. While this is not a strong constraint at the base station level, it remains a challenge at the handset level due to cost and size limitations. For this reason, the SAIC technique, an alternative to the complex ML multiuser demodulation technique, is still of interest for 4G wireless networks using MIMO technology and STBC in particular. In a system with characteristics similar to the North American digital mobile radio standard IS-54 (24.3 k symbols per second with an 81 Hz fading rate), adaptive retransmission with time deviation is not practical.

Practical implications

Performance is evaluated in terms of bit error rate (BER) and convergence time; the MLD technique outperforms the others in terms of received SNR and decoding complexity. MLD performs well, but with a higher number of antennas it requires more computational time and therefore increased hardware complexity. When the MRC scheme is implemented for a single input single output (SISO) system, the BER drops to 10^-2 at an SNR of 20 dB; when MIMO systems are employed for MRC, improved BER-versus-SNR results are obtained. A comparative study of the different detection techniques is then performed. Initially the ZF detection method is used, which is then modified to ZF with successive interference cancellation (ZF-SIC). With successive interference cancellation, ZF-SIC performs better than the ML and MMSE estimates, although for the 2 × 2 scheme with QPSK modulation ZF-SIC requires more computational time than the ZF, MMSE and ML techniques. From the obtained results it is concluded that ZF-SIC improves on ZF in terms of BER. The detection algorithm produces ZF-based decision statistics for a desired sub-stream from the received vector, which contains interference from previously transmitted sub-streams. A decision is then made on that sub-stream, and its contribution is regenerated and subtracted from the received vector. Without interference cancellation, system performance is reduced but computational cost is saved. When cancellation is used, the MMSE coefficients are recalculated at each iteration as H is deflated; without cancellation, the MMSE coefficients are computed only once, because H remains unchanged. For the MMSE 4 × 4 BPSK scheme, a bit error rate of 10^-2 at 30 dB is observed.
In general, the most demanding step of the detection algorithm is the computation of the MMSE coefficients, whose complexity grows as the number of transmit antennas increases. However, when implementing adaptive MMSE receivers on slowly fading channels, it is possible to recover the signal with complexity linear in the number of transmit antennas. The performance of MMSE and MMSE with successive interference cancellation is observed for 2 × 2 and 4 × 4 BPSK and QPSK modulation schemes. The drawback of the MMSE-SIC scheme is that the first detected signal sees noise and interference from (NT − 1) signals, while signals processed later see less interference as cancellation progresses. This difficulty can be overcome by the OSIC detection method, which orders the processed layers by decreasing signal power, or by allocating power to the transmitted signals according to the processing order. With the successive scheme, NT delay stages are required to carry out the cancellation process. The work also compares BER across various modulation schemes and antenna counts. MLD computes the Euclidean distance between the received signal vector and the product of every possible transmitted vector with the given channel H, and selects the vector with the minimum distance. Estimated results show that a higher diversity order is obtained by employing more antennas at both the transmitting and receiving ends. MLD with the 8 × 8 binary phase shift keying (BPSK) scheme offers a bit error rate near 10^-4 at an SNR of 16 dB, using Alamouti space–time coding.
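The linear ZF and MMSE detectors compared above can be sketched as follows. This is a generic, noiseless illustration on a hypothetical 2 × 2 channel, not the paper's implementation; note that with zero noise variance the MMSE detector reduces to ZF.

```python
import numpy as np

def zf_detect(H, y):
    # Zero forcing: invert the channel directly; amplifies noise
    # when H is ill-conditioned.
    return np.linalg.pinv(H) @ y

def mmse_detect(H, y, noise_var):
    # MMSE: regularise the inversion by the noise variance, trading a
    # small bias for much less noise amplification.
    Nt = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + noise_var * np.eye(Nt),
                           H.conj().T @ y)

rng = np.random.default_rng(1)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = np.array([1 + 1j, -1 + 1j])   # transmitted QPSK-like symbols
y = H @ x                          # noiseless reception
print(np.round(zf_detect(H, y), 6))   # recovers x
```

OSIC and MLD build on these statistics: OSIC detects the strongest layer first and subtracts it, while MLD searches all candidate vectors for the minimum Euclidean distance.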

Social implications

It should come as no surprise that companies everywhere are pushing to get products to market faster. Missing a market window or a design cycle can be a major setback in a competitive environment. It should be equally clear that this pressure comes at the same time that companies are pushing towards "leaner" organizations that can do more with less. In this increasingly high-pressure design environment, the trends mentioned earlier are not well supported by current test and measurement equipment: measuring signals across multiple domains requires multiple pieces of equipment, increasing capital or rental expenses; the methods available for making cross-domain, time-correlated measurements are inefficient, reducing engineering efficiency; when equipment for logic analysis, time-domain and RF spectrum measurements is used only occasionally, the operator often has to re-learn each separate instrument; and the equipment needed to measure wide-bandwidth, time-varying spectral signals is expensive, again increasing capital or rental expenses. What is needed is a measurement instrument with a common user interface that integrates multiple measurement capabilities into a single cost-effective tool that can efficiently measure signals in current wide-bandwidth, time-correlated, cross-domain environments. The market for wireless communication using STBCs has large scope for expansion in India; the proposed work therefore has techno-commercial potential, and the product can be patented. This project will in turn be helpful for remote areas of the nearby region, particularly the Gadchiroli district, the Melghat Tiger Reserve of Amravati district, Nagzira and so on, where electricity is not available and network coverage is a constant problem. In some regions where electricity is available, the shortage is such that it cannot be used during peak hours.
In such cases, the stand-alone space diversity technique STBC will help to establish connections during coverage problems, giving higher data transmission rates with better quality of service (QoS) and fewer dropped connections. This trend towards wireless everywhere is causing a profound change in the responsibilities of embedded designers as they struggle to incorporate unfamiliar RF technology into their designs, and they frequently find themselves needing to solve problems without the proper equipment for the task.

Originality/value

Work is original.

Details

International Journal of Intelligent Unmanned Systems, vol. 10 no. 2/3
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 27 August 2019

Awadhesh Pratap Singh and Chandan Sharma

Abstract

Purpose

The purpose of this paper is to compare and analyze the modern productivity estimation techniques, namely, Levinsohn and Petrin (LP, 2003), Ackerberg Caves and Frazer (ACF, 2006), Wooldridge (2009) and Mollisi and Rovigatti (MR, 2017) on unit-level data of 32 Indian industries for the period 2009-2015.

Design/methodology/approach

The paper first analyzes the different issues encountered in total factor productivity (TFP) measurement. It then categorizes productivity estimation techniques into three logical generations: traditional, new and advanced. Next, it selects four contemporary estimation techniques, computes industrial TFP for Indian states using them and investigates their empirical outcomes. The paper also performs a robustness check to ascertain which estimation technique is more robust.

Findings

The results indicate that the TFP growth of Indian industries has differed greatly over this seven-year period, but the estimates are sensitive to the techniques used. Further results suggest that ACF and Wooldridge yield more consistent outcomes than LP and MR. The robustness test confirms Wooldridge to be the most robust contemporary technique for productivity estimation, followed by ACF and LP.

Originality/value

To the authors’ knowledge, this is the first study that compares the contemporary productivity estimation techniques. In this backdrop, this paper offers two novelties. First, it uses advanced production estimation techniques to compute TFP of 32 diverse industries of an emerging economy: India. Second, it addresses the fitment of estimation techniques by drawing a comparison and by conducting a robustness test, hence, contributing to the limited literature on comparing contemporary productivity estimation techniques.

Details

Indian Growth and Development Review, vol. 13 no. 1
Type: Research Article
ISSN: 1753-8254

Article
Publication date: 27 December 2022

Bright Awuku, Eric Asa, Edmund Baffoe-Twum and Adikie Essegbey

Abstract

Purpose

Challenges associated with ensuring the accuracy and reliability of cost estimation of highway construction bid items are of significant interest to state highway transportation agencies. Even with the existing research undertaken on the subject, the problem of inaccurate estimation of highway bid items still exists. This paper aims to assess the accuracy of the cost estimation methods employed in the selected studies to provide insights into how well they perform empirically. Additionally, this research seeks to identify, synthesize and assess the impact of the factors affecting highway unit prices, because they drive the total cost of highway construction.

Design/methodology/approach

This paper systematically searched, selected and reviewed 105 papers from Scopus, Google Scholar, American Society of Civil Engineers (ASCE), Transportation Research Board (TRB) and Science Direct (SD) on conceptual cost estimation of highway bid items. This study used content and nonparametric statistical analyses to determine research trends, identify, categorize the factors influencing highway unit prices and assess the combined performance of conceptual cost prediction models.

Findings

Findings from the trend analysis showed that between 1983 and 2019 North America, Asia, Europe and the Middle East contributed the most to improving highway cost estimation research. Aggregating the quantitative results and weighting the findings by each study's sample size revealed that the average error between actual and estimated project costs of Monte Carlo simulation models (5.49%) was lower than that of the Bayesian model (5.95%), support vector machines (6.03%), case-based reasoning (11.69%), artificial neural networks (12.62%) and regression models (13.96%). This paper identified 41 factors, grouped into three categories based on the common classification used in the selected papers: (1) factors relating to project characteristics; (2) organizational factors; and (3) estimate factors. The mean ranking analysis showed that most of the selected papers relied on project-specific factors more than the other factors when estimating highway construction bid items.
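The sample-size weighting used to aggregate the per-study errors reduces to a weighted mean. The sketch below uses hypothetical error percentages and sample sizes for illustration, not the studies' actual figures.

```python
def weighted_mean_error(errors, sample_sizes):
    # Average the per-study mean errors, weighting each study by its
    # sample size so larger studies count proportionally more.
    total = sum(sample_sizes)
    return sum(e * n for e, n in zip(errors, sample_sizes)) / total

# Hypothetical per-study mean absolute errors (%) and sample sizes
# for one model class:
print(weighted_mean_error([4.0, 7.0, 6.0], [120, 40, 40]))  # 5.0
```

An unweighted mean of the same three studies would give 5.67%, so the weighting matters whenever study sizes differ substantially.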

Originality/value

This paper contributes to the body of knowledge by analyzing and comparing the performance of highway cost estimation models, identifying and categorizing a comprehensive list of cost drivers to stimulate future studies in improving highway construction cost estimates.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 3
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 16 January 2017

Wei Zhang, Xianghong Hua, Kegen Yu, Weining Qiu, Xin Chang, Bang Wu and Xijiang Chen

Abstract

Purpose

Nowadays, WiFi indoor positioning based on received signal strength (RSS) has become a research hotspot due to its low cost and ease of deployment. To further improve the performance of RSS-based WiFi indoor positioning, this paper aims to propose a novel position estimation strategy called radius-based domain clustering (RDC). This domain clustering technology avoids the issue of access point (AP) selection.

Design/methodology/approach

The proposed positioning approach uses each individual AP among all available APs to estimate the position of the target point. Then, following the circular error probable concept, the authors use the RDC algorithm to search for the decision domain: the circle of minimum radius containing 50 per cent of the intermediate position estimates. The final estimate of the target point's position is obtained by averaging the intermediate position estimates in the decision domain.
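A simplified reading of the RDC step can be sketched as follows. Restricting the candidate circle centres to the intermediate estimates themselves is an assumption made here for brevity, and `rdc_estimate` is a hypothetical name; the paper's actual search procedure may differ.

```python
import math

def rdc_estimate(points):
    # Radius-based domain clustering, simplified: among candidate circle
    # centres (here, the intermediate estimates themselves), find the one
    # whose circle covering half the estimates has minimum radius, then
    # average the covered estimates.
    k = (len(points) + 1) // 2            # 50 per cent of the estimates
    best_radius, best_members = float("inf"), points
    for cx, cy in points:
        dists = sorted(math.hypot(px - cx, py - cy) for px, py in points)
        radius = dists[k - 1]             # smallest circle at this centre holding k points
        if radius < best_radius:
            best_members = [(px, py) for px, py in points
                            if math.hypot(px - cx, py - cy) <= radius]
            best_radius = radius
    n = len(best_members)
    return (sum(p[0] for p in best_members) / n,
            sum(p[1] for p in best_members) / n)

# Four consistent per-AP estimates near (0, 0) plus four outliers:
estimates = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1),
             (5.0, 5.0), (6.0, -4.0), (-3.0, 7.0), (8.0, 8.0)]
print(rdc_estimate(estimates))            # close to (0.05, 0.05)
```

Because the decision domain keeps only the tightest half of the per-AP estimates, the outlying estimates from poorly placed APs never enter the final average, which is how the method sidesteps AP selection.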

Findings

Experiments are conducted, and comparison between the different position estimation strategies demonstrates that the new method has a better location estimation accuracy and reliability.

Research limitations/implications

The weighted k nearest neighbor approach and the naive Bayes classifier are two classic position estimation strategies for location determination using WiFi fingerprinting. Both strategies are affected by AP selection, and inappropriate selection of APs may degrade positioning performance considerably.

Practical implications

The RDC positioning approach can improve the performance of WiFi indoor positioning, and the issue of AP selection and related drawbacks is avoided.

Social implications

An effective RSS-based WiFi indoor positioning system can make up for the indoor weaknesses of global navigation satellite systems. Many indoor location-based services can be encouraged by this effective and low-cost positioning technology.

Originality/value

A novel position estimation strategy is introduced to avoid the AP selection problem in RSS-based WiFi indoor positioning technology, and the domain clustering technology is proposed to obtain a better accuracy and reliability.

Article
Publication date: 1 March 1988

R.T.M. WHIPPLE

Abstract

A development project is characterised by many periods of negative cash flows followed by a relatively smaller number of cash surplus periods. Thus, because of the time value of money, a major risk in real estate development arises from events which extend the periods between the negative and positive cash flows. This paper reviews the traditional methods of evaluating development projects in this context and suggests that more detailed cash flow techniques should be adopted to allow greater flexibility in appraisals, thus accounting for changes in circumstances through sensitivity and scenario analysis. However, even where such techniques are used, developments should not be viewed in isolation and consideration must also be given to the feasibility of a scheme in a corporate framework.
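The time-value-of-money risk described above can be made concrete with a simple net present value calculation; the cash flows and discount rate below are hypothetical.

```python
def npv(rate, flows):
    # Net present value of period-end cash flows, first flow at time 0.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

# Hypothetical scheme: two periods of outlay, then a sale receipt.
base    = [-100, -100, 250]        # receipt arrives in period 2
delayed = [-100, -100, 0, 250]     # the same receipt slips one period

r = 0.10
print(round(npv(r, base), 2))      # 15.7
print(round(npv(r, delayed), 2))   # -3.08: delay alone turns profit to loss
```

A one-period slip in the cash surplus, with no change in its amount, is enough to flip the appraisal from viable to unviable, which is why the paper argues for cash flow techniques with sensitivity and scenario analysis.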

Details

Journal of Valuation, vol. 6 no. 3
Type: Research Article
ISSN: 0263-7480
