Search results

1 – 10 of over 2000
Article
Publication date: 26 June 2019

Camille Desrochers, Pierre-Majorique Léger, Marc Fredette, Seyedmohammadmahdi Mirhoseini and Sylvain Sénécal

Online grocery shopping possesses characteristics that can make it more difficult than regular online shopping. There are numerous buying decisions to make in each shopping session…


Abstract

Purpose

Online grocery shopping possesses characteristics that can make it more difficult than regular online shopping: there are numerous buying decisions to make in each shopping session, large ranges of product types to choose from, and varied arithmetical complexity. The purpose of this paper is to examine how such characteristics influence the attitude of consumers toward online grocery shopping websites.

Design/methodology/approach

The authors hypothesized that the product type (search or experience product), the task arithmetic complexity, and the attention and cognitive load associated with browsing through product pictures have an effect on the attitude of online shoppers toward these websites. To test the hypotheses, 31 subjects participated in a within-subject laboratory experiment.

Findings

The results suggest that visual attention to product pictures has a positive effect on the attitude of online shoppers toward a website when they are shopping for experience goods, but that it has a negative effect on their attitude toward a website when the task arithmetic complexity is greater. They also suggest that the cognitive load associated with browsing through product pictures has a negative effect on the attitude of online shoppers toward a website when they are shopping for experience goods, and that greater cognitive load variation has a positive effect on their attitude toward a website when arithmetic task complexity is greater.

Practical implications

When designing online grocery websites, providing clear single-unit quantities with pictures corresponding to the sales unit could help establish a clear baseline from which consumers can work out their quantity requirements. For decisions involving experience goods, product pictures may act as an important complementary information source and may even be more diagnostic than text descriptions.

Originality/value

The results reinforce the relevance of enriching the study of self-reported measures of the user experience on e-commerce sites with automatic measures.

Article
Publication date: 15 February 2022

Gade Mary Swarna Latha and S. Rooban

This research work discusses quantum-dot cellular automata (QCA) concepts through the design of arithmetic and logic units, and is most relevant to nanoelectronic…


Abstract

Purpose

This research work discusses quantum-dot cellular automata (QCA) concepts through the design of arithmetic and logic units. The work is most relevant to nanoelectronic applications: the VLSI industry increasingly depends on fault-tolerant, QCA-based arithmetic logic unit (ALU) designs. An ALU design is governed by its instruction set and rules, and the low-power, high-performance operation these demand is attainable with a QCA-based reversible arithmetic and logic unit. The main objective of this investigation is an ultra-low-power, ultra-high-speed ALU design in QCA technology, implemented through reversible logic.

Design/methodology/approach

QCA logic is a critical enabler for nano-scale designs that deliver considerably faster integrated modules and effective computation at lower energy consumption. Processors need an ALU in order to process and calculate data, and a fault-tolerant ALU in QCA technology utilizing reversible logic is the primary objective of this study. The reversible ALU (RAU) is divided into two sections: a logical unit (LAU) and an arithmetic unit.

Findings

A reversible 2 × 1 multiplexer based on the Fredkin gate (FRG) was developed to allow users to choose between arithmetic and logical operations. QCA full adders are also implemented to improve arithmetic operations' performance. The ALU is built using reversible logic gates that are fault-tolerant.
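
As background for this finding, here is a minimal Python sketch of how a Fredkin (controlled-swap) gate realizes a reversible 2 × 1 multiplexer. The gate definition is the standard one; the selection convention and the `mux2x1` wrapper are illustrative assumptions, not the paper's QCA layout.

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: swaps a and b when c == 1.
    Reversible: applying the gate twice restores the inputs."""
    return (c, b, a) if c else (c, a, b)

def mux2x1(select, x0, x1):
    """2 x 1 multiplexer from one Fredkin gate (illustrative wrapper):
    the first swapped output carries x1 when select = 1, else x0."""
    _, out, _ = fredkin(select, x0, x1)
    return out

# Truth-table check: the output follows x0 when select = 0, x1 otherwise.
for s in (0, 1):
    for x0 in (0, 1):
        for x1 in (0, 1):
            assert mux2x1(s, x0, x1) == (x1 if s else x0)
```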

Originality/value

In contrast to earlier research, the suggested multilayered ALU with reversible QCA operation is implemented. The 8- and 16-bit ALUs, as well as the logical unit, are designed with fewer gates, constant inputs and outputs. The implementation is built in the Mentor Graphics QCA tool, and all functionalities are verified.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 1
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 1 June 1999

Hooman Estelami

Research in marketing indicates that consumers may be sensitive to the final digits of prices. For example, despite being substantively equivalent, a price such as $199 may create…


Abstract

Research in marketing indicates that consumers may be sensitive to the final digits of prices. For example, despite being substantively equivalent, a price such as $199 may create more favorable price perceptions than $200. However, existing research has primarily focused on the effects of price endings in the context of uni‐dimensional prices – prices consisting of a single number. Advertised prices in the marketplace are often multi‐dimensional, consisting of numerous price dimensions. In such pricing contexts, price endings may influence consumers’ ability to conduct the arithmetic required to compute the total advertised price. This paper examines the effect of various price ending strategies on consumers’ computational efforts. The findings indicate that the more commonly exercised price ending strategies tend to result in prices that are the most difficult for consumers to evaluate.
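
To make the computational effort concrete, a small sketch with invented figures (none drawn from the study): a multi-dimensional price whose dimensions carry 9-endings forces digit-by-digit addition with carrying, while the equivalent round-ended price can be summed at a glance.

```python
# Hypothetical three-dimension advertised price: a base charge plus two
# surcharges. The figures are invented for illustration.
just_below = [19.99, 4.99, 2.99]   # common 9-ending strategy
rounded    = [20.00, 5.00, 3.00]   # substantively similar round endings

print(f"{sum(just_below):.2f}")    # 27.97 -- requires carrying to evaluate
print(f"{sum(rounded):.2f}")       # 28.00 -- trivially 20 + 5 + 3
```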

Details

Journal of Product & Brand Management, vol. 8 no. 3
Type: Research Article
ISSN: 1061-0421


Article
Publication date: 1 September 2003

Hooman Estelami

A significant amount of research in pricing has focused on price as a unidimensional construct – one consisting of a single number (e.g. $1.99). However, the evolving marketing…


Abstract

A significant amount of research in pricing has focused on price as a unidimensional construct – one consisting of a single number (e.g. $1.99). However, the evolving marketing environment, combined with notable growth in services and goods categories that require the communication of complex price information, has led to the use of multi‐dimensional prices. Multi‐dimensional prices consist of multiple numbers (e.g. $199 a month for 36 months) and as a result require the consumer to carry out specific mental computations to determine the cost of the offer. In this paper, empirical evidence on consumer difficulty in evaluating multi‐dimensional prices is examined. The strategic impact of such difficulties for pricing managers as well as regulators is then assessed. The paper concludes with a discussion of the implications of multi‐dimensional pricing for past research findings, and reflects on existing understanding of consumer response to price.
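
As a worked illustration of the mental computation such a price demands, using the figure quoted above:

```latex
% Exact total of the multi-dimensional price "$199 a month for 36 months":
\[
  \$199 \times 36 = \$7{,}164 ,
\]
% versus the round-number anchor a consumer is likely to substitute:
\[
  \$200 \times 36 = \$7{,}200 .
\]
```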

Details

Journal of Product & Brand Management, vol. 12 no. 5
Type: Research Article
ISSN: 1061-0421


Article
Publication date: 21 August 2019

Hiren K. Mewada and Jitendra Chaudhari

The digital down converter (DDC) is a principal component in modern communication systems. The DDC process traditionally entails quadrature down conversion, bandwidth reducing…

Abstract

Purpose

The digital down converter (DDC) is a principal component in modern communication systems. The DDC process traditionally entails quadrature down conversion, bandwidth-reducing filters and commensurate sample rate reduction. To avoid group delay distortion, linear phase FIR filters are used in the DDC. The filter performance specifications related to deep stopband attenuation, small in-band ripple and narrow transition bandwidth lead to filters with a large number of coefficients. To reduce the computational workload of the filtering process, filtering is often performed as a two-stage process, the first stage being a down-sampling Hogenauer (or cascaded integrator-comb) filter followed by a reduced-sample-rate FIR filter. An alternative option is an M-path polyphase partition of a band-centered FIR filter. Even though IIR filters offer a reduced workload for a specific filtering task, they are often avoided because of their poor group delay characteristics. This paper aims to propose the design of M-path, approximately linear phase IIR filters as an alternative to the M-path FIR filter.

Design/methodology/approach

Two filter designs are presented in the paper. The first approach uses a linear phase IIR low-pass structure to reduce the filter's coefficient count, whereas the second uses a multipath polyphase structure to design an approximately linear phase IIR filter for the DDC.
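
For context, here is a minimal NumPy sketch of the M-path polyphase decimation idea for an FIR prototype, the baseline structure the proposed designs compete with; the filter length and decimation factor are illustrative assumptions, and the paper's approach would replace the FIR path filters with approximately linear phase IIR sections.

```python
import numpy as np
from scipy.signal import firwin, lfilter

M = 4                        # decimation factor = number of polyphase paths
h = firwin(64, 1.0 / M)      # prototype low-pass FIR, cutoff pi/M rad/sample
x = np.random.randn(1024)    # stand-in for the down-converted input stream

# Reference: filter at the high rate, then keep every M-th output sample.
ref = lfilter(h, 1.0, x)[::M]

# Polyphase: path k holds coefficients h[k::M] and is fed x[n*M - k], so
# all arithmetic runs at the low (output) rate -- the workload reduction
# the two-stage and M-path structures are after.
out = np.zeros(len(x) // M)
for k in range(M):
    xk = x[::M] if k == 0 else np.concatenate(([0.0], x[M - k::M]))
    out += lfilter(h[k::M], 1.0, xk[:len(out)])

assert np.allclose(ref, out)   # both compute the same decimated signal
```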

Findings

The authors have compared the performance and workload of the proposed polyphase-structured IIR filters with state-of-the-art filter designs used in DDCs. The proposed design is seen to satisfy tight design specifications with a significant reduction in arithmetic operations and required power consumption.

Originality/value

The proposed design is an alternative to the M-path polyphase FIR filter, requiring far fewer filter coefficients. The proposed DDC using a polyphase-structured IIR filter satisfies the linear phase requirement at the lowest computational cost among the compared DDC structures.

Details

Circuit World, vol. 45 no. 3
Type: Research Article
ISSN: 0305-6120


Book part
Publication date: 24 March 2006

Ngai Hang Chan and Wilfredo Palma

Since the seminal works by Granger and Joyeux (1980) and Hosking (1981), estimation of long-memory time series models has been receiving considerable attention and a number of…

Abstract

Since the seminal works by Granger and Joyeux (1980) and Hosking (1981), estimation of long-memory time series models has been receiving considerable attention and a number of parameter estimation procedures have been proposed. This paper gives an overview of this plethora of methodologies with special focus on likelihood-based techniques. Broadly speaking, likelihood-based techniques can be classified into the following categories: exact maximum likelihood (ML) estimation (Sowell, 1992; Dahlhaus, 1989), ML estimates based on autoregressive approximations (Granger & Joyeux, 1980; Li & McLeod, 1986), Whittle estimates (Fox & Taqqu, 1986; Giraitis & Surgailis, 1990), Whittle estimates with autoregressive truncation (Beran, 1994a), approximate estimates based on the Durbin–Levinson algorithm (Haslett & Raftery, 1989), state-space-based maximum likelihood estimates for ARFIMA models (Chan & Palma, 1998), and estimation of stochastic volatility models (Ghysels, Harvey, & Renault, 1996; Breidt, Crato, & de Lima, 1998; Chan & Petris, 2000), among others. Given the diversified applications of these techniques in different areas, this review aims at providing a succinct survey of these methodologies as well as an overview of important related problems such as ML estimation with missing data (Palma & Chan, 1997), the influence of subsets of observations on estimates, and the estimation of seasonal long-memory models (Palma & Chan, 2005). Performances and asymptotic properties of these techniques are compared and examined. Inter-connections and finite sample performances among these procedures are studied. Finally, applications of these methodologies to financial time series are discussed.
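
To make one of the surveyed techniques concrete, here is a compact Python sketch of a profile Whittle estimate of the memory parameter d for an ARFIMA(0, d, 0) series, using the standard spectral shape |2 sin(λ/2)|^(−2d); this is a generic textbook form, not code from the chapter.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_d(x):
    """Profile Whittle estimate of d for an ARFIMA(0, d, 0) series."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, n // 2 + 1) / n      # Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:n // 2 + 1]) ** 2 / (2 * np.pi * n)

    def objective(d):
        g = np.abs(2 * np.sin(lam / 2)) ** (-2 * d)     # spectral shape
        # Whittle likelihood with the innovation variance concentrated out.
        return np.log(np.mean(I / g)) + np.mean(np.log(g))

    return minimize_scalar(objective, bounds=(-0.49, 0.49),
                           method="bounded").x

# Sanity check: for white noise the estimate should be close to 0.
print(whittle_d(np.random.randn(4096)))
```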

Details

Econometric Analysis of Financial and Economic Time Series
Type: Book
ISBN: 978-1-84950-388-4

Article
Publication date: 15 October 2019

Yicong Gao, Chuan He, Bing Zheng, Hao Zheng and Jianrong Tan

Complexity is the main challenge for present and future manufacturers. Assembly complexity heavily affects a product’s final quality in the fully automated assembly system. This…

Abstract

Purpose

Complexity is the main challenge for present and future manufacturers. Assembly complexity heavily affects a product’s final quality in a fully automated assembly system. This paper aims to propose a new method to assess the complexity of a modern automated assembly system at the assembly design stage with respect to the characteristics of both the manufacturing system and each single component to be mounted. To validate the predictive model, a regression model is additionally presented to estimate the statistical relationship between the real assembly defect rate and the predicted complexity of the fully automated assembly system.

Design/methodology/approach

The research herein extends S. N. Samy and H. A. ElMaraghy’s model and seeks to redefine the predictive model using fuzzy evaluation of a fully automated assembly process at the assembly design stage. As an evaluation based on a deterministic scale with accurate crisp numbers can hardly reflect the uncertainty of the judgement, fuzzy linguistic variables are used to measure the interaction among influence factors. A dependency matrix is proposed to estimate the assembly complexity with respect to the interactions between mechanical design, electrical design and process factors and the main functions of the assembly system. Furthermore, a complexity attributes matrix for single parts is presented to map the relationship between all individual parts to be mounted and the three major factors mentioned in the dependency matrix.
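
To illustrate the kind of computation such matrices enable, here is a small hypothetical NumPy sketch; the functions, factors, defuzzified ratings and aggregation rule are all invented for illustration and do not reproduce the paper's fuzzy scales.

```python
import numpy as np

# Dependency matrix: rows are assembly-system functions, columns are the
# three factors (mechanical design, electrical design, process). Entries
# are defuzzified linguistic ratings on [0, 1] (e.g. "low" -> 0.25).
dependency = np.array([
    [0.75, 0.25, 0.50],   # feeding
    [0.50, 0.75, 0.75],   # handling / positioning
    [0.25, 0.50, 0.75],   # joining
])

# Complexity attributes matrix: one row per component to be mounted,
# rated against the same three factors.
parts = np.array([
    [0.50, 0.25, 0.75],   # part A
    [0.75, 0.75, 0.50],   # part B
])

# One simple aggregation: couple each part to the system functions and
# sum into a single relative complexity index per part.
coupling = parts @ dependency.T
print(coupling.sum(axis=1))   # relative complexity, part A vs part B
```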

Findings

The newly proposed model presents a formal quantification to predict assembly complexity. It clarifies how the attributes of the assembly system and product components complicate the assembly process and in turn influence manufacturing performance. A center bolt valve in the camshaft of a continuously variable valve timing system is used to demonstrate the application of the developed methodology.

Originality/value

This paper presents a method that can be used to improve the design solution of an assembly concept and optimize the process flow for the least complexity.

Details

Assembly Automation, vol. 39 no. 5
Type: Research Article
ISSN: 0144-5154


Article
Publication date: 20 May 2019

Q.X. Liu, J.K. Liu and Y.M. Chen

A nonclassical method, usually called the memory-free approach, has shown promising potential to relieve the arithmetic complexity and high memory-storage requirements of solving…

Abstract

Purpose

A nonclassical method, usually called the memory-free approach, has shown promising potential to relieve the arithmetic complexity and high memory-storage requirements of solving fractional differential equations. Though many successful applications indicate the validity and effectiveness of memory-free methods, their rigorous theoretical basis has been much less understood. This study focuses on the theoretical basis of the memory-free Yuan–Agrawal (YA) method [Journal of Vibration and Acoustics 124 (2002), pp. 321-324].

Design/methodology/approach

Mathematically, the YA method is based on the validity of two fundamental procedures. The first is to reverse the integration order of an improper quadrature deduced from the Caputo-type fractional derivative. The second concerns the passage to the limit under the integral sign of the improper quadrature.
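
For reference, the standard YA representation behind both procedures, for a Caputo derivative of order 0 < α < 1:

```latex
% Caputo-type fractional derivative of order 0 < \alpha < 1:
\[
  D^{\alpha} x(t) \;=\; \frac{1}{\Gamma(1-\alpha)}
    \int_{0}^{t} \frac{\dot{x}(\tau)}{(t-\tau)^{\alpha}} \, d\tau .
\]
% Yuan-Agrawal form: an improper integral over auxiliary states
% \phi(\omega, t), each governed by a first-order (memory-free) ODE:
\[
  D^{\alpha} x(t) \;=\; \int_{0}^{\infty} \phi(\omega, t)\, d\omega ,
  \qquad
  \frac{\partial \phi}{\partial t}
    \;=\; -\,\omega^{2} \phi
      \;+\; \frac{2 \sin(\pi\alpha)}{\pi}\, \omega^{2\alpha-1}\, \dot{x}(t),
  \qquad \phi(\omega, 0) = 0 .
\]
```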

Findings

Though uniform convergence of the improper integral would suffice to justify reversing the integration order, it is proved not to hold. Alternatively, this paper proves that the integration order can still be reversed, provided the target solution can be expanded as a Taylor series on [0, ∞). Once the integration order is reversed, the paper presents a sufficient condition for the passage to the limit under the integral sign such that the target solution is continuous on [0, ∞). Both positive examples and counterexamples are presented to illustrate and validate the theoretical analysis.

Originality/value

This study presents useful results on the real performance of the YA and similar memory-free approaches. In addition, it opens a theoretical question on the sufficient and necessary conditions, if any, for the validity of memory-free approaches.

Details

Engineering Computations, vol. 36 no. 4
Type: Research Article
ISSN: 0264-4401


Book part
Publication date: 10 June 2009

Craig Emby

The evaluation of competing hypotheses is an essential aspect of the audit process. The method of evaluation and re-evaluation may have implications for both efficiency and…

Abstract

The evaluation of competing hypotheses is an essential aspect of the audit process. The method of evaluation and re-evaluation may have implications for both efficiency and effectiveness. This paper presents the results of a field experiment using a case study set in the context of a fraud investigation in which practicing auditors were required to engage in multiple hypothesis probability estimation and revision regarding the perpetrator of the fraud. The experiment examined the effect of two different methods of facilitating multiple hypothesis probability estimation and revision consistent with the completeness and complementarity norms of probability theory as it applies to the independence versus dependence of competing hypotheses and with the prescriptions of Bayes' Theorem. The first method was to have participants use linear probability elicitation scales and receive prior tutoring in probability theory emphasizing the axioms of completeness and complementarity. The second method was to provide a graphical decision aid, without prior tutoring, to aid the participants in expressing their responses. A third condition, in which participants used linear probability elicitation scales but received no tutoring in probability theory, provided a benchmark against which to assess the effects of the two treatments.

Participants receiving prior tutoring in probability theory and using linear probability elicitation scales complied in their estimations and revisions with the probability axioms of completeness and complementarity. However, they engaged in frequent violations of the normative probability model and of Bayes' Theorem. They did not distribute changes in the probability of the target hypothesis to the nontarget hypotheses, and they engaged in “eliminations and resuscitations” whereby they eliminated a suspect by assigning a zero probability to that suspect at an intermediate iteration and resuscitated that suspect by reassigning him or her a positive probability at a later iteration. The participants using the graphical decision aids, by construction, did not violate the probability axioms of completeness and complementarity. However, with no imposed constraints, the patterns of their revisions were different. When they revised the probability of the target hypothesis, they revised the probabilities of the nontarget hypotheses. They did not engage in eliminations and resuscitations. These patterns are more consistent with the norms of probability theory and with Bayes' Theorem. Possible explanations of this phenomenon are proposed and discussed, including implications for audit practice and future research.
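
A minimal Python sketch of the normative pattern at issue: under Bayes' Theorem over an exhaustive, mutually exclusive suspect set, revising the target hypothesis automatically redistributes probability across the non-target hypotheses, and completeness and complementarity hold by construction. The suspects and likelihoods below are invented for illustration.

```python
# Uniform priors over four mutually exclusive, exhaustive suspects.
priors = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}

# Hypothetical likelihoods P(evidence | suspect) -- invented values.
likelihood = {"A": 0.80, "B": 0.30, "C": 0.30, "D": 0.10}

# Bayes' Theorem: posterior is proportional to prior times likelihood.
unnorm = {s: priors[s] * likelihood[s] for s in priors}
total = sum(unnorm.values())
posterior = {s: p / total for s, p in unnorm.items()}

print(posterior)  # A rises; B, C and D absorb the complementary change
assert abs(sum(posterior.values()) - 1.0) < 1e-12  # complementarity
# No suspect is driven exactly to zero, so the "elimination and
# resuscitation" pattern observed experimentally cannot arise here.
```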

Details

Advances in Accounting Behavioral Research
Type: Book
ISBN: 978-1-84855-739-0

Article
Publication date: 6 July 2015

R C Mittal and Amit Tripathi

The purpose of this paper is to develop an efficient numerical scheme for non-linear two-dimensional (2D) parabolic partial differential equations using modified bi-cubic B-spline…

Abstract

Purpose

The purpose of this paper is to develop an efficient numerical scheme for non-linear two-dimensional (2D) parabolic partial differential equations using modified bi-cubic B-spline functions. As a test case, the method has been applied successfully to the 2D Burgers equations.

Design/methodology/approach

The scheme is based on collocation of modified bi-cubic B-spline functions, which the authors use to approximate the space variable and its derivatives. The collocation form of the partial differential equation results in a system of first-order ordinary differential equations (ODEs), which is solved by a strong-stability-preserving Runge–Kutta method. The computational complexity of the method is O(p log(p)), where p denotes the total number of mesh points.
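
The collocation step is problem-specific, but the time integrator named above is standard; here is a minimal Python sketch of one third-order strong-stability-preserving Runge–Kutta (Shu–Osher) step for the semi-discrete system u' = F(u), with a toy right-hand side standing in for the collocation ODEs.

```python
import numpy as np

def ssp_rk3_step(F, u, dt):
    """One third-order SSP Runge-Kutta (Shu-Osher) step for u' = F(u)."""
    u1 = u + dt * F(u)                           # forward Euler stage
    u2 = 0.75 * u + 0.25 * (u1 + dt * F(u1))     # convex combination
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * F(u2))

# Toy usage on u' = -u (standing in for the collocation ODE system):
u = np.array([1.0])
for _ in range(100):
    u = ssp_rk3_step(lambda v: -v, u, 0.01)
print(u)   # approximately exp(-1) ~ 0.3679
```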

Findings

The numerical solutions obtained are better than those available in the literature. Ease of implementation and very low computational cost are two major advantages of the present method. Moreover, the method provides approximate solutions not only at the grid points but at any point in the solution domain.

Originality/value

For the first time, modified bi-cubic B-spline functions have been applied to non-linear 2D parabolic partial differential equations. The efficiency of the proposed method has been confirmed by numerical experiments. The authors conclude that the method provides convergent approximations and handles the equations very well in different cases.
