Search results
Feng Cui, Dong Gao and Jianhua Zheng
Abstract
Purpose
The main reason for the low accuracy of magnetometer-based autonomous orbit determination is the coarse accuracy of the geomagnetic field model. Furthermore, the geomagnetic field model error increases markedly during geomagnetic storms, which further reduces navigation accuracy. The purpose of this paper is to improve the accuracy of magnetometer-based autonomous orbit determination during geomagnetic storms.
Design/methodology/approach
In this paper, magnetometer-based autonomous orbit determination via a measurement differencing extended Kalman filter (MDEKF) is studied. The MDEKF algorithm can effectively remove the time-correlated portion of the measurement error and thus markedly improve the accuracy of magnetometer-based autonomous orbit determination during geomagnetic storms. Real flight data from Swarm A are used to evaluate the performance of the MDEKF algorithm presented in this study. A performance comparison between the MDEKF algorithm and an extended Kalman filter (EKF) algorithm is carried out for different geomagnetic storms and sampling intervals.
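The measurement-differencing idea can be illustrated with a minimal sketch (this is not the authors' implementation; the signal, bias model and noise levels are invented for illustration): subtracting consecutive measurements largely cancels a slowly varying, time-correlated error while leaving only white noise and the change in the true signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
truth = np.sin(np.linspace(0, 4 * np.pi, n))       # hypothetical true signal
bias = 2.0 + np.cumsum(rng.normal(0, 0.01, n))     # slowly varying, time-correlated error
white = rng.normal(0, 0.05, n)                     # white measurement noise
y = truth + bias + white                           # raw measurement

# Differencing consecutive measurements: the time-correlated part nearly
# cancels, since it changes little between adjacent samples.
dy = np.diff(y)
d_truth = np.diff(truth)
residual = dy - d_truth   # error left after removing the known signal change

print(np.std(y - truth), np.std(residual))
```

The residual error of the differenced measurement is far smaller than the raw measurement error, which is the effect the MDEKF exploits; the shorter the sampling interval, the better the time-correlated part cancels.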
Findings
The simulation results show that the MDEKF algorithm is superior to the EKF algorithm in terms of estimation accuracy and stability with a short sampling interval during geomagnetic storms. In addition, as the intensity of the geomagnetic storm increases, the advantages of the MDEKF algorithm over the EKF algorithm become more pronounced.
Originality/value
The algorithm in this paper can improve the real-time accuracy of magnetometer-based autonomous orbit determination during geomagnetic storms with a low computational burden and is very suitable for low-orbit micro- and nano-satellites.
Guangrun Sheng, Xixiang Liu, Zixuan Wang, Wenhao Pu, Xiaoqiang Wu and Xiaoshuang Ma
Abstract
Purpose
This paper aims to present a novel transfer alignment method based on combined double-time observations of velocity and attitude for ships with poor maneuverability, to address the system errors introduced by flexural deformation and installation, which are difficult to calibrate.
Design/methodology/approach
Based on velocity and attitude matching, the Kalman filter model is redesigned and derived by combining double-time observations. By introducing the sample from the previous update cycle of the strapdown inertial navigation system (SINS), the difference between the current and previous observations is used as the measurement for the transfer alignment filter, so the systematic measurement errors introduced by deformation and installation can be effectively removed.
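A toy illustration of why differencing the observations removes such errors (the numbers are invented; this is not the paper's filter): a constant offset, such as a fixed installation misalignment, added to every observation cancels exactly when the previous cycle's observation is subtracted.

```python
import numpy as np

rng = np.random.default_rng(1)
install_err = np.array([0.5, -0.3, 0.2])   # hypothetical constant misalignment (deg)
att_true = rng.normal(0, 1.0, (100, 3))    # stand-in for the true attitude sequence
noise = rng.normal(0, 0.01, (100, 3))      # white sensor noise
obs = att_true + install_err + noise       # measured attitude

# Double-time observation: subtract the previous cycle's observation.
d_obs = obs[1:] - obs[:-1]
d_true = att_true[1:] - att_true[:-1]

# The constant installation error cancels exactly; only white noise remains.
err = d_obs - d_true
print(np.abs(err).max())
```

No maneuver is needed for the cancellation because the offset is identical at both epochs, which matches the abstract's claim that the method works without active maneuvers.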
Findings
The results of simulations and turntable tests show that, in the presence of systematic errors, the proposed method improves alignment accuracy and shortens the alignment process without requiring active maneuvers or additional sensor equipment.
Originality/value
Calibrating flexural deformation and installation errors during transfer alignment requires special maneuvers along different axes, which is difficult for ships with poor maneuverability. Without additional sensor equipment or active maneuvers, the proposed algorithm eliminates the systematic errors in the attitude measurement while improving the accuracy of shipboard SINS transfer alignment.
Irfan Sayim and Dan Zhang
Abstract
Purpose
The purpose of this work is to obtain an overbounded broadcast sigma from actual (non-Gaussian) correction error distribution under the stringent navigation integrity requirements for aircraft precision approach and landing.
Design/methodology/approach
The approach statistically overbounds the satellite pseudorange correction error distribution using a numerical solution of the Fisher-Z transformation. Inflation factors for the overbounded broadcast sigma are extracted from the Fisher-Z transformation based on measured correlation and counted independent and identically distributed (iid) sample sizes from true empirical data.
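The Fisher-Z transformation underlying the approach can be sketched as follows (a minimal sketch only; the paper's numerical overbounding solution and inflation factors are not reproduced, and the example numbers are invented): the transform maps a sample correlation to an approximately normal variable, so a confidence interval that accounts for the finite iid sample size can be computed and mapped back.

```python
import math

def fisher_z_interval(r, n_iid, conf_z=1.96):
    """Confidence interval for a correlation via the Fisher-Z transform.

    r      : sample correlation of the correction errors
    n_iid  : effective number of iid samples
    conf_z : normal quantile for the desired coverage
    """
    z = math.atanh(r)                    # Fisher-Z transform: z = arctanh(r)
    se = 1.0 / math.sqrt(n_iid - 3)      # approximate standard error in z-space
    lo, hi = z - conf_z * se, z + conf_z * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to correlation space

lo, hi = fisher_z_interval(0.6, 50)
print(round(lo, 3), round(hi, 3))
```

The smaller the iid sample count, the wider the interval, which is why the counted iid sample sizes drive the inflation of the broadcast sigma.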
Findings
New overbounded broadcast sigma values for eight long-pass satellites were obtained from measured empirical data, ensuring an integrity risk at the 10⁻⁸ probability level. The proposed methodology successfully overbounds ground-reflection multipath-type systematic and temporal error sources.
Originality/value
This paper introduces a new method of accounting for ground-reflection multipath in local area augmentation system/ground-based augmentation system navigation integrity. The method can also statistically overbound any other serially correlated temporal variation in measured data, provided both the correlation values and the finite iid sample sizes are known.
Abstract
Using data from the U.S. National Longitudinal Study of Adolescent Health, this chapter investigates the impact of individual drug use on robbery, burglary, theft, and damaging property for juveniles. Using a variety of fixed-effects models that exploit variations over time and between siblings and twins, the results indicate that drug use has a significant impact on the propensity to commit crime. We find that the median impact of cocaine use on the propensity to commit various types of crimes is 11 percentage points. Using inhalants or other drugs each increases the propensity to commit crime by 7 percentage points.
B. Brian Lee, Eric Press and Byeonghee [Ben] Choi
Abstract
This paper investigates distortions in financial statements that arise from employing capital assets. Use of historical cost depreciation tends to overstate earnings because of inflation effects, which in turn misrepresents firms' capacities to expand operations or to distribute dividends. We argue that the financial statement effects of inflation can be traced to two main sources: understated depreciation and interest expense. Depending on a firm's capital structure choices, the distortion from historical cost depreciation is heightened or mitigated. Measurement errors in accounting numbers obscure the relation between price and earnings. We develop value-relevant adjustments that enhance the informativeness of earnings. We also show that the effects of measurement errors from using historical cost depreciation are most pronounced in firms that carry lower levels of debt.
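A worked arithmetic sketch of the depreciation distortion (the asset cost, life and inflation rate are invented; this is an illustration, not the paper's adjustment): under straight-line depreciation, the historical-cost charge stays fixed while the price-level-adjusted charge grows with inflation, so reported earnings are overstated by the difference.

```python
# Hypothetical asset: cost 1000, 10-year straight line, 5% annual inflation.
cost, life, infl = 1000.0, 10, 0.05

year = 5
hist_dep = cost / life                    # historical-cost depreciation charge
repl_cost = cost * (1 + infl) ** year     # price-level-adjusted asset cost
real_dep = repl_cost / life               # inflation-adjusted depreciation charge

# Earnings are overstated by the shortfall in the depreciation charge.
overstatement = real_dep - hist_dep
print(round(hist_dep, 2), round(real_dep, 2), round(overstatement, 2))
```

After five years at 5% inflation the historical-cost charge understates the real charge by roughly a quarter, and the gap widens every year the asset remains in service.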
Kamil Krasuski and Janusz Ćwiklak
Abstract
Purpose
The purpose of this paper is to present the problem of implementation of the differential global navigation satellite system (DGNSS) differential technique for aircraft accuracy positioning. The paper particularly focuses on identification and an analysis of the accuracy of aircraft positioning for the DGNSS measuring technique.
Design/methodology/approach
The investigation uses the DGNSS method of positioning, which is based on the model of single code differences for global navigation satellite system (GNSS) observations. In the research experiment, the authors used single-frequency code observations in the global positioning system (GPS)/global navigation satellite system (GLONASS) systems from the on-board receiver Topcon HiperPro and the reference station REF1 (the reference station for the military airport EPDE in Deblin in south-eastern Poland). The geodetic Topcon HiperPro receiver was installed in a Cessna 172 aircraft for the aviation test. The paper presents a new methodology for the DGNSS solution in air navigation. The aircraft position was estimated using a "weighted mean" scheme for the differential GPS and differential GNSS solutions, respectively. The final resultant aircraft position was compared with a precise real-time kinematic on-the-fly solution.
Findings
The investigations found that the average accuracy of positioning the Cessna 172 aircraft in the geocentric XYZ coordinates is approximately +0.03 to +0.33 m along the X axis, −0.02 to +0.14 m along the Y axis and +0.02 to −0.15 m along the Z axis. Moreover, the root mean square errors, which determine the accuracy of positioning the Cessna 172 with the DGNSS differential technique in geocentric XYZ coordinates, are below 1.2 m.
Research limitations/implications
The research requires data from both the GNSS on-board receiver and a GNSS reference receiver. In addition, pseudo-range corrections from the base stations were applied in the observation model of the DGNSS solution.
Practical implications
The presented research method can be used in a ground-based augmentation system (GBAS), which has not yet been deployed in Polish aviation.
Social implications
The paper is intended for people working in aviation and air transport.
Originality/value
The study presents the DGNSS differential technique as a precise method for recovering aircraft position in civil aviation; the method can also be used to position aircraft based on GPS and GLONASS code observations.
Abstract
Purpose
The purpose of this paper is to propose an integrated approach to modeling and measuring supply chain performance and stability using system dynamics (SD) and the autoregressive integrated moving average (ARIMA).
Design/methodology/approach
SD and ARIMA models were developed, respectively, for modeling and measuring supply chain performance and for further analyzing and projecting supply chain stability for long‐term management. A case study from a typical semiconductor equipment manufacturing company is used to illustrate and validate the proposed method.
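A toy sketch of the stability idea (the data are invented; the paper's SD and ARIMA models are not reproduced): fitting an AR(1) coefficient, the simplest ARIMA(1,0,0) case, to a synthetic performance index shows a mean-reverting, and hence stable, process when the fitted coefficient is below one in magnitude.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical monthly performance index generated as an AR(1) process
# around a stable mean of 0.64 (echoing the paper's average OPIN).
phi_true, mean = 0.7, 0.64
x = np.empty(120)
x[0] = mean
for t in range(1, 120):
    x[t] = mean + phi_true * (x[t - 1] - mean) + rng.normal(0, 0.02)

# Fit the AR(1) coefficient by least squares on lagged deviations.
y, y_lag = x[1:] - mean, x[:-1] - mean
phi_hat = (y_lag @ y) / (y_lag @ y_lag)

# One-step-ahead forecast; the process is stable if |phi_hat| < 1.
forecast = mean + phi_hat * (x[-1] - mean)
print(round(phi_hat, 2), round(forecast, 3))
```

A fitted coefficient well inside the unit interval means shocks decay and the index reverts to its mean, which is the sense in which the abstract's "stable but not outstanding" verdict can be projected forward.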
Findings
Effectiveness and efficiency, with six corresponding indicators (product reliability, employee fulfillment, customer fulfillment, on‐time delivery, profit growth, and working efficiency), were found to be the most significant factors in the performance of the supply chain. The results of the combined model provide evidence that supply chain performance of the case company is up to standard (average OPIN=0.64) and is considered stable, but still far from outstanding. Continuous improvement, especially in supply chain efficiency, is suggested in order to maximize performance.
Originality/value
This integrated approach is innovative and can be extended to other disciplines. This study provides a practical and easy-to-use model that enables senior and top-management decision makers and operations managers involved in the supply chain to assess, forecast, and take anticipatory action, so that the supply chain can improve in a time-saving and effective manner and achieve excellence in performance.
Segun Thompson Bolarinwa, Olufemi Bodunde Obembe and Clement Olaniyi
Abstract
Purpose
The purpose of this paper is to re-examine the determinants of bank profitability in Nigeria. Specifically, the study investigates the effect of managerial cost efficiency on bank profitability. Also, since there exist mixed results and controversies in the literature, in both developed and developing countries, regarding the effect of efficiency on bank profitability, this study employs the standard measure of efficiency. In addition, the work incorporates the role of persistence, which is often neglected in the literature in developing countries.
Design/methodology/approach
This study employs the system generalized method of moments (GMM) estimator.
Findings
The findings, using the case of Nigeria, show that cost efficiency is a strong determinant of bank profitability in developing countries. In addition, the profitability of banks in Nigeria persists over time; hence, the industry is fairly competitive.
Research limitations/implications
The recent policies of banking industry recapitalization meant to increase profitability and stability in Nigeria and other African countries’ banking industry will not be effective if the issue of managerial efficiency is not properly addressed.
Practical implications
Improving banks' managerial efficiency will reduce bad loans, thus promoting stability in the banking system.
Originality/value
The authors measure efficiency using the standard stochastic frontier analysis approach. The study also introduces the role of persistence into the literature on developing countries.
Leif Edvinsson, Brendan Kitts and Tord Beding
Abstract
A methodology based on multi-dimensional scaling and mathematical statistics is introduced that reduces the high dimensionality of "IC-differencing components" into a 3-D representation – the digital IC-landscape. Building and maintaining a digital IC-landscape systematically supports the pedagogical display of IC complexity, the migration of IC-affecting knowledge, the exploratory retrieval of high IC-efficiency, and investment planning and forecasting. In this project, 11 companies – with a total of 20–64 "essential variables" and "free parameters" – have been analyzed and the results of the study are reported.
Abstract
This paper examines the impact of measurement error in wage data on the estimation of returns to seniority. Earnings surveys collect wage data through questions pertaining to earnings and hours over a given period of time (year, week) or through direct reports of hourly wages. Comparing results for different wage variables from the Panel Study of Income Dynamics (PSID), it is shown that estimated returns to seniority are very sensitive to the type of wage data used. Estimates based on yearly reports are typically twice as large as those based on direct reports. Two sources account for this discrepancy. First, the inclusion of earnings from secondary jobs and overtime in the PSID annual earnings data tends to overestimate returns to seniority. Second, hourly wages computed from yearly measures contain substantial measurement errors that tend to bias coefficients upward.