Search results
1 – 10 of over 3000
Nathan Parker, Jonathan Alt, Samuel Buttrey and Jeffrey House
Abstract
Purpose
This research develops a data-driven statistical model capable of predicting US Army Reserve (USAR) unit staffing levels based on unit location demographics. The model provides decision makers with an assessment of a proposed station location’s ability to support a unit’s personnel requirements from the local population.
Design/methodology/approach
This research first develops an allocation method to overcome challenges caused by overlapping unit boundaries and prevent over-counting the population. Once populations are accurately allocated to each location, we then develop and compare the performance of statistical models that estimate a location’s likelihood of meeting staffing requirements.
Findings
This research finds that local demographic factors prove essential to a location’s ability to meet staffing requirements. We recommend that the USAR and US Army Recruiting Command (USAREC) use the logistic regression model developed here to support USAR unit stationing decisions; this should improve the ability of units to achieve required staffing levels.
Originality/value
This research meets a direct request from the USAREC, in conjunction with the USAR, for assistance in developing models to aid decision makers during the unit stationing process.
Details
Keywords
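The logistic regression approach recommended in the abstract above can be illustrated with a minimal NumPy sketch; the single demographic feature and the staffing labels below are invented for illustration, not the authors' USAR data.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Fit P(unit meets staffing | local demographics) by gradient descent."""
    X = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient of the log-loss
    return w

def predict_proba(w, X):
    X = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X @ w))

# Toy data: one demographic feature (e.g. qualified local population, scaled).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = fit_logistic(X, y)
p = predict_proba(w, X)
```

The fitted probabilities can then be thresholded to flag proposed station locations unlikely to meet staffing requirements.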
Edmund Baffoe-Twum, Eric Asa and Bright Awuku
Abstract
Background: Geostatistics focuses on spatial or spatiotemporal datasets. It was initially developed to generate probability distribution predictions of ore grade in the mining industry; however, it has been successfully applied in diverse scientific disciplines. The technique includes univariate and multivariate methods as well as simulations. The Kriging geostatistical methods (simple, ordinary, and universal Kriging) are not multivariate models in the usual statistical sense. Notwithstanding, simple, ordinary, and universal Kriging techniques utilize random function models that include unlimited random variables while modeling one attribute. The coKriging technique is a multivariate estimation method that simultaneously models two or more attributes defined over the same domain as a coregionalization.
Objective: This study investigates the impact of populations on traffic volumes as a variable. The additional variable determines the strength or accuracy obtained when data integration is adopted. In addition, this is to help improve the estimation of annual average daily traffic (AADT).
Methods, procedures, process: The investigation adopts the coKriging (CK) technique, with AADT data from 2009 to 2016 from Montana, Minnesota, and Washington as the primary attribute and population as a controlling factor (secondary variable). CK was implemented for this study after a review of the literature and of completed work comparing it with other geostatistical methods.
Results, observations, and conclusions: The investigation employed two variables. The data integration methods employed in CK yield more reliable models because their strength is drawn from multiple variables. The cross-validation results of the model types explored with the CK technique successfully evaluate the interpolation technique's performance and help select optimal models for each state. The Montana and Minnesota models accurately represent those states' traffic and population density. The Washington model had a few exceptions; however, the secondary attribute helped yield an accurate interpretation. Consequently, the impact of tourism, shopping, recreation centers, and possible transiting patterns throughout the state is worth exploring.
Details
Keywords
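The spatial interpolation at the heart of the study above can be sketched with single-attribute ordinary kriging in NumPy; coKriging extends this linear system with a secondary attribute (here, population), which this simplified sketch omits. The exponential variogram below is an illustrative assumption, not the authors' fitted model.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, variogram=lambda h: 1.0 - np.exp(-h / 10.0)):
    """Estimate the attribute z at location xy0 from observations (xy, z)."""
    n = len(z)
    # Pairwise distances between observation points.
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Ordinary kriging system: [Gamma 1; 1' 0] [w; mu] = [gamma0; 1].
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - np.asarray(xy0), axis=-1))
    w = np.linalg.solve(A, b)[:n]   # kriging weights (they sum to 1)
    return float(w @ z)

# Three hypothetical AADT count stations; estimate at a known station.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([100.0, 200.0, 300.0])
est = ordinary_kriging(xy, z, [0.0, 0.0])
```

Because kriging is an exact interpolator, the estimate at an observed station reproduces the observed value; the cross-validation the authors describe repeats this leave-one-out at every station.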
Abstract
Mobile applications affect our everyday activities and have become more and more information centric. Effort estimation for mobile applications is an essential factor to consider in the development cycle. Due to feature complexity and size, effort estimation of mobile applications poses a continued challenge for developers. This paper attempts to adapt COSMIC Function Point and Unified Modeling Language (UML) techniques to estimate the size of a given mobile application. The COSMIC concepts capture the data movements of the functional processes, whereas the UML class diagrams analyze them. We utilize use case diagrams, sequence diagrams and class diagrams to map the functional user requirements for sizing mobile applications. We further present a new size measurement technique, Unadjusted Mobile COSMIC Function Points (UMCFP), to obtain the functional size of a mobile application using Mobile Complex Factors as an input. In this study, eight mobile applications were analyzed using UMCFP, Function Point Analysis and COSMIC Function Points. The results were compared with the actual sizes of previous mobile application projects.
Details
Keywords
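COSMIC sizing counts four kinds of data movement (Entry, Exit, Read, Write), each worth one COSMIC Function Point. A minimal sketch follows, with made-up functional processes; the paper's UMCFP technique additionally adjusts these counts using Mobile Complex Factors, which is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class FunctionalProcess:
    """Data movements of one functional process, per the COSMIC model."""
    name: str
    entries: int = 0   # Entry: data crossing into the process from a user
    exits: int = 0     # Exit: data sent out to a user
    reads: int = 0     # Read: data retrieved from persistent storage
    writes: int = 0    # Write: data moved to persistent storage

    def cfp(self) -> int:
        # Each data movement counts as one COSMIC Function Point.
        return self.entries + self.exits + self.reads + self.writes

def app_size(processes) -> int:
    """Functional size of the application: sum over its functional processes."""
    return sum(p.cfp() for p in processes)

login = FunctionalProcess("login", entries=2, exits=1, reads=1)
sync = FunctionalProcess("sync contacts", entries=1, exits=1, reads=2, writes=1)
size = app_size([login, sync])
```

The functional processes and their movement counts would in practice be read off the use case and sequence diagrams the paper describes.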
Dickson Chigariro and Njabulo Bruce Khumalo
Abstract
Purpose
This study aims to find out how the e-records management subject has been researched and tackled by researchers in the Eastern and Southern African Regional Branch of the International Council on Archives (ESARBICA).
Design/methodology/approach
This research paper applied a bibliometric survey, where a quantitative survey of the literature pertaining to the study of e-records management in the ESARBICA region, covering the period from 2000 to 2016, was conducted applying bibliometric methods. The survey aimed at providing descriptive data that cast a spotlight on the features and development of the e-records management base literature in the ESARBICA region.
Findings
The research data display a lamentable outlook for contributions to the electronic records management body of knowledge from the ESARBICA region. Few research articles by professionals in the records and archives management field are being published. These figures call for increased investment in electronic records management research by institutions in ESARBICA, as management of electronic content has become central to political and socio-economic development. Follow-up studies need to be done to counter the limitations placed on this research paper. The findings show that there is underproduction of research publications in the ESARBICA region: in the period under review, the region contributed only 2 per cent of the total world output in the study of electronic records management in journals indexed by Scopus.
Research limitations/implications
A bibliometric study places researchers at the mercy of analysing incomplete information due to resource limitations. Variance in the terminology (keywords) used by authors of published research articles may mean that some articles on the same subject matter are left out of the analysis. Although due diligence was exercised in using Boolean search methods to counter such limitations, they are unavoidable. Interpretation of bibliometric or citation analysis research is subjective, as some analysts may label results incomplete or unreliable; hence, this paper finds itself in the same predicament. Inability to access the Thomson Reuters Web of Science database left the authors with Scopus as the only option, as Google Scholar was overlooked due to the difficulty of having to rely on third-party software for analysing its indexed content, which is mostly inaccurate and/or ambiguous.
Practical implications
The findings of this study help uncover areas in e-records management, which have been researched over the years, and identify the prominent e-records management researchers in the ESARBICA region.
Originality/value
A number of bibliometric studies have been conducted; however, none has been conducted to establish e-records management research trends in the ESARBICA region.
Details
Keywords
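The descriptive tallies such a bibliometric survey produces can be sketched with stdlib counters; the bibliographic records below are invented placeholders, not data from the study.

```python
from collections import Counter

def publication_trends(records):
    """Tally publications per year and per author from bibliographic records."""
    per_year = Counter(r["year"] for r in records)
    per_author = Counter(a for r in records for a in r["authors"])
    return per_year, per_author

# Hypothetical records of the kind extracted from a Scopus export.
records = [
    {"year": 2005, "authors": ["Author A"]},
    {"year": 2010, "authors": ["Author A", "Author B"]},
    {"year": 2010, "authors": ["Author C"]},
]
per_year, per_author = publication_trends(records)
top_authors = per_author.most_common(2)   # most prolific authors
```

`most_common` is how one would surface the "prominent e-records management researchers" the study identifies.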
Santhosh Srinivas and Huigang Liang
Abstract
Purpose
While every firm is striving to embrace digital transformation (DT) to form new differentiating business capabilities, there are dark sides to such initiatives, and it is essential to acknowledge, identify and address them. The purpose of this paper is to identify and empirically demonstrate the impact of such dark sides of DT. While a firm's DT effort may have many dark sides, the authors identify data breaches as the most critical one and focus on proving their impact, since they can inflict significant damage to the firm.
Design/methodology/approach
Through the lens of paradox theory, the authors argue that the DT efforts of a firm will lead to increased risk and severity of data breaches. The authors developed a one-of-a-kind longitudinal data set by combining data from multiple sources, including 3604 brands over a 10-year period, and employed a DT performance scorecard to evaluate a firm's DT effort across four key digital selling touchpoints: site, mobile, digital marketing and social media.
Findings
The findings of this study show that a firm's DT efforts pertaining to its mobile and digital marketing platforms significantly increase the likelihood and severity of a data breach event, indicating that these two channels are most vulnerable and need heightened attention from firms. Furthermore, the findings suggest that the negative repercussions of some DT initiatives may be minimized as the firm becomes more innovative. The findings can help firms re-strategize their DT efforts by promoting security and encouraging a balanced communication strategy.
Originality/value
This research is one of the first to identify, recognize and empirically illustrate the downsides of a DT effort that is otherwise thought to provide only benefits.
Details
Keywords
Gerd Hübscher, Verena Geist, Dagmar Auer, Nicole Hübscher and Josef Küng
Abstract
Purpose
Knowledge- and communication-intensive domains still long for a better support of creativity that considers legal requirements, compliance rules and administrative tasks as well, because current systems focus either on knowledge representation or business process management. The purpose of this paper is to discuss our model of integrated knowledge and business process representation and its presentation to users.
Design/methodology/approach
The authors follow a design science approach in the environment of patent prosecution, which is characterized by a highly standardized, legally prescribed process and individual knowledge work. Thus, the research is based on knowledge work, BPM, graph-based knowledge representation and user interface design. The authors iteratively designed and built a model and a prototype. To evaluate the approach, the authors used analytical proof of concept, real-world test scenarios and case studies in real-world settings, where the authors conducted observations and open interviews.
Findings
The authors designed a model and implemented a prototype for evolving and storing static and dynamic aspects of knowledge. The proposed solution leverages the flexibility of a graph-based model to enable not only open, continuously developing user-centered processes but also pre-defined ones. The authors further propose a user interface concept which supports users in benefiting from the richness of the model while providing sufficient guidance.
Originality/value
The balanced integration of the data and task perspectives distinguishes the model significantly from other approaches such as BPM or knowledge graphs. The authors further provide a sophisticated user interface design, which allows users to use the graph-based knowledge representation effectively and efficiently in their daily work.
Details
Keywords
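A graph-based model that stores both knowledge items and process tasks as typed nodes, as described above, can be sketched minimally; the labels and relation names below are illustrative, not the authors' schema.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                    # node type, e.g. "Document" or "Task"
    props: dict = field(default_factory=dict)

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (source, relation, target)

    def add_node(self, key, label, **props):
        self.nodes[key] = Node(label, props)

    def add_edge(self, src, rel, dst):
        self.edges.append((src, rel, dst))

    def neighbours(self, key, rel=None):
        """Targets of outgoing edges from key, optionally filtered by relation."""
        return [d for s, r, d in self.edges
                if s == key and (rel is None or r == rel)]

# A hypothetical patent-prosecution fragment: a task consumes a knowledge item.
g = Graph()
g.add_node("claim1", "Document", title="Independent claim")
g.add_node("draft", "Task", status="open")
g.add_edge("draft", "USES", "claim1")
```

Keeping tasks and documents in the same graph is what lets one model mix pre-defined process steps with freely evolving knowledge, as the abstract describes.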
Sheunesu Zhou, Ayansola O. Ayandibu, Tendai Chimucheka and Mandla M. Masuku
Abstract
Purpose
This study evaluates the impact of government social protection interventions on households’ welfare in South Africa.
Design/methodology/approach
The study uses survey data comprising 393 observations and the multinomial logistic regression technique to analyse the effect of government interventions on households’ welfare. For robustness purposes, a negative binomial regression model is also estimated whose results corroborate the main results from the multinomial regression model.
Findings
The study’s findings show that government economic interventions through social protection significantly reduce the likelihood of a decrease in household income or consumption. COVID-19 grant/social relief of distress grant, unemployment insurance, tax relief and job protection and creation are all significant in sustaining household income and consumption.
Practical implications
The findings have policy implications for social development. Specifically, the findings support the use of government social protection as a safety net for low-income groups in South Africa.
Originality/value
The study presents preliminary evidence on the effectiveness of several measures used to ameliorate the COVID-19-induced recession within the South African context.
Details
Keywords
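A multinomial (softmax) logistic regression of the kind estimated here can be sketched in NumPy; the welfare-outcome categories and the single covariate below are invented for illustration, not the survey data.

```python
import numpy as np

def fit_multinomial(X, y, k, lr=0.5, n_iter=5000):
    """Softmax regression: y holds class labels in {0, ..., k-1}."""
    X = np.column_stack([np.ones(len(X)), X])  # intercept column
    W = np.zeros((X.shape[1], k))
    Y = np.eye(k)[y]                           # one-hot targets
    for _ in range(n_iter):
        Z = X @ W
        Z -= Z.max(axis=1, keepdims=True)      # numerical stability
        P = np.exp(Z)
        P /= P.sum(axis=1, keepdims=True)      # class probabilities
        W -= lr * X.T @ (P - Y) / len(y)       # cross-entropy gradient
    return W

def predict(W, X):
    X = np.column_stack([np.ones(len(X)), X])
    return np.argmax(X @ W, axis=1)

# Toy data: outcome 0 = income fell, 1 = unchanged, 2 = rose,
# driven by a single invented intervention covariate.
X = np.array([[-2.0], [-1.5], [0.0], [0.5], [2.0], [2.5]])
y = np.array([0, 0, 1, 1, 2, 2])
W = fit_multinomial(X, y, k=3)
pred = predict(W, X)
```

The robustness check the authors mention would refit the same outcomes with a negative binomial model and compare signs and significance.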
Abstract
Purpose
How can one obtain a list of the 100 largest scientific publishers sorted by journal count? Existing databases are unhelpful, as each of them contains biased omissions and data quality flaws. This paper tries to fill this gap with an alternative approach.
Design/methodology/approach
The content coverages of Scopus, Publons, DOAJ and SherpaRomeo were first used to extract a preliminary list of publishers that supposedly possess at least 15 journals. Second, the publishers' websites were scraped to fetch their portfolios and, thus, their “true” journal counts.
Findings
The outcome is a list of the 100 largest publishers comprising 28,060 scholarly journals, with the largest publishing 3,763 journals and the smallest carrying 76 titles. The usual “oligopoly” of major publishing companies leads the list, but it also contains 17 university presses from the Global South and, surprisingly, 30 predatory publishers that together publish 4,517 journals.
Research limitations/implications
Additional data sources could be used to mitigate remaining biases; it is difficult to disambiguate publisher names and their imprints; and the dataset carries a non-uniform distribution, thus risking the omission of data points in the lower range.
Practical implications
The dataset can serve as a useful basis for comprehensive meta-scientific surveys at the publisher level.
Originality/value
The catalogue can be deemed more inclusive and diverse than others because many of the publishers would have been overlooked if one had drawn from merely one or two sources. The list is freely accessible and invites regular updates. The approach used here (web scraping) has seldom been used in meta-scientific surveys.
Details
Keywords
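Counting a publisher's journals from a scraped portfolio page can be sketched with the stdlib HTML parser; the link pattern and the markup below are hypothetical, and a real scraper would fetch each publisher's site and adapt the pattern per site.

```python
from html.parser import HTMLParser

class JournalLinkCounter(HTMLParser):
    """Count anchors whose href matches a journal-page pattern (hypothetical)."""

    def __init__(self, pattern="/journal/"):
        super().__init__()
        self.pattern = pattern
        self.count = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; value may be None.
        if tag == "a" and any(name == "href" and self.pattern in (value or "")
                              for name, value in attrs):
            self.count += 1

# Hypothetical portfolio-page markup standing in for a fetched page.
page = ('<a href="/journal/abc">Journal A</a>'
        '<a href="/about">About us</a>'
        '<a href="/journal/def">Journal B</a>')
counter = JournalLinkCounter()
counter.feed(page)
```

Deduplicating the matched hrefs before counting would guard against a journal being linked twice on the same page.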
Edmund Baffoe-Twum, Eric Asa and Bright Awuku
Abstract
Background: The annual average daily traffic (AADT) data from road segments are critical for roadway projects, especially with the decision-making processes about operations, travel demand, safety-performance evaluation, and maintenance. Regular updates help to determine traffic patterns for decision-making. Unfortunately, the luxury of having permanent recorders on all road segments, especially low-volume roads, is virtually impossible. Consequently, insufficient AADT information is acquired for planning and new developments. A growing number of statistical, mathematical, and machine-learning algorithms have helped estimate AADT data values accurately, to some extent, at both sampled and unsampled locations on low-volume roadways. In some cases, roads with no representative AADT data are resolved with information from roadways with similar traffic patterns.
Methods: This study adopted an integrative approach combining a systematic literature review (SLR) and meta-analysis (MA) to identify and evaluate the performance, the sources of error, and the possible advantages and disadvantages of the techniques utilized most for estimating AADT data. To this end, an SLR of various peer-reviewed articles and reports was completed to answer four research questions.
Results: The study showed that the most frequent techniques utilized to estimate AADT data on low-volume roadways were regression, artificial neural-network techniques, travel-demand models, the traditional factor approach, and spatial interpolation techniques. The performance of these AADT estimation methods was subjected to meta-analysis. Three analyses were completed, based on R-squared, root mean square error, and mean absolute percentage error. The meta-analysis results indicated a mixed summary effect: (1) all studies were equal; (2) all studies were not comparable. However, the integrated qualitative and quantitative approach indicated that spatial-interpolation (Kriging) methods outperformed the others.
Conclusions: Spatial-interpolation methods may be selected over others by practitioners at all levels to generate accurate AADT data for decision making. In addition, the resulting cross-validation statistics provide performance measures comparable to those of the other methods.
Details
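The fixed-effect pooling behind such a meta-analysis can be sketched with inverse-variance weights plus Cochran's Q statistic, which speaks directly to the "all studies equal" versus "not comparable" question above; the per-study effect sizes below are invented.

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted summary effect and Cochran's Q statistic."""
    w = [1.0 / v for v in variances]
    summary = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))                        # SE of the summary
    # Q measures heterogeneity: large Q suggests studies are not comparable.
    q = sum(wi * (e - summary) ** 2 for wi, e in zip(w, effects))
    return summary, se, q

# Invented per-study effects (e.g. R-squared values) with equal variances.
summary, se, q = fixed_effect_meta([1.0, 3.0], [1.0, 1.0])
```

When Q is large relative to its degrees of freedom, a random-effects model is the usual next step, matching the mixed summary effect the study reports.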