Search results

1 – 10 of over 2000
Article
Publication date: 25 June 2021

San Hay Mar Hay Mar Shwe, Nobuo Funabiki, Yan Watequlis Syaifudin, Phyu Phyu Tar, Htoo Htoo Sandi Kyaw, Hnin Aye Thant, Wen-Chung Kao, Nandar Win Min, Thandar Myint and Ei Ei Htet

This study presents the value trace problem (VTP) for Python programming self-study, extending previous work on the Java programming learning assistant system. In total, 130…

Abstract

Purpose

This study presents the value trace problem (VTP) for Python programming self-study, extending previous work on the Java programming learning assistant system. In total, 130 VTP instances are generated from Python code in textbooks and websites, covering basic/advanced grammar topics, fundamental data structures and algorithms, and two common library usages. In addition, references on the Python programming topics related to the VTP instances are provided to help novice learners solve them efficiently.

Design/methodology/approach

PyPLAS offers the VTP to study grammar topics and library usage through code reading. A VTP instance asks a learner to trace the actual values of important variables or output messages in the given source code. The correctness of any answer is checked through string matching.
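
The string-matching check described above is simple enough to sketch. The following is a hypothetical illustration only — the function name, the traced snippet and the expected values are invented, not taken from PyPLAS:

```python
# Minimal sketch of a value trace problem (VTP) answer checker, assuming
# each blank is graded by exact string matching, as the abstract describes.

def check_vtp(expected_values, learner_answers):
    """Return per-blank correctness via simple string matching."""
    return [exp.strip() == ans.strip()
            for exp, ans in zip(expected_values, learner_answers)]

# Example VTP: trace the value of `total` after each iteration of
#   total = 0
#   for i in [1, 2, 3]:
#       total += i
expected = ["1", "3", "6"]
answers = ["1", "3", "5"]   # the learner's last trace is wrong
print(check_vtp(expected, answers))  # [True, True, False]
```

Stripping whitespace before comparison is one plausible leniency; the actual PyPLAS matching rules are not specified in the abstract.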

Findings

The applications to 48 undergraduate students in Myanmar and Indonesia confirm the validity of the proposal in Python programming self-studies by novice learners.


Details

International Journal of Web Information Systems, vol. 17 no. 4
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 10 June 2021

Hande Yavuz

Python code is developed for versatile structural analysis of a three-spar multi-cell box beam by means of the idealization approach.

Abstract

Purpose

Python code is developed for versatile structural analysis of a three-spar multi-cell box beam by means of the idealization approach.

Design/methodology/approach

Shear flow distribution, stiffener loads, the location of the shear center and the location of the geometric center are computed via the NumPy module. Data visualization is performed using the Matplotlib module.
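
As a flavor of the kind of NumPy computation the abstract describes, the snippet below locates the geometric center of an idealized (boom) cross-section; the boom areas and coordinates are invented illustrative values, not data from the paper:

```python
# Area-weighted centroid of an idealized cross-section: in the
# idealization approach, skin and stringers are lumped into discrete
# booms, each with a concentrated area at a known (x, y) position.
import numpy as np

areas = np.array([300.0, 300.0, 600.0, 600.0])      # boom areas, mm^2
coords = np.array([[0.0, 0.0], [500.0, 0.0],
                   [0.0, 200.0], [500.0, 200.0]])   # (x, y), mm

# centroid = sum(A_i * r_i) / sum(A_i)
centroid = (areas[:, None] * coords).sum(axis=0) / areas.sum()
print(centroid)  # [250.  133.33...]
```

The same vectorized pattern extends to first moments of area for shear flow and shear center calculations.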

Findings

Python scripts are developed for the structural analysis of multi-cell box beams in lieu of longhand solutions. The in-house developed Python code is made available for use with finite element analysis for verification purposes.

Originality/value

The use of Python scripts for structural analysis provides prompt visualization, especially when dimensional variations are considered in the frame of aircraft structural design. The developed scripts serve as a practical tool that is widely applicable to various multi-cell wing boxes for stiffness purposes. This could be further extended to structural integrity problems, covering the effect of gaps and/or cut-outs on shear flow distribution in box beams.

Details

Aircraft Engineering and Aerospace Technology, vol. 93 no. 5
Type: Research Article
ISSN: 1748-8842


Article
Publication date: 4 September 2017

Jan-Niclas Walther, Michael Petsch and Dieter Kohlgrüber

The purpose of this paper is to present some of the key achievements. At DLR, a sophisticated interdisciplinary aircraft design process is being developed, using the CPACS data…

Abstract

Purpose

At DLR, a sophisticated interdisciplinary aircraft design process is being developed, using the CPACS data format (Nagel et al., 2012; Scherer and Kohlgrüber, 2016) as a means of exchanging results. Within this process, TRAFUMO (transport aircraft fuselage model; Scherer et al., 2013), built on ANSYS and the Python programming language, is the current tool for the automatic generation and subsequent sizing of global finite element fuselage models. Recently, much effort has gone into improving the tool's performance and opening up the modeling chain to further finite element solvers. The purpose of this paper is to present some of the key achievements of this work.

Design/methodology/approach

Much functionality has been shifted from specific routines in ANSYS to Python, including the automatic creation of global finite element models based on geometric and structural data from CPACS and the conversion of models between different finite element codes. Furthermore, a new method for modeling and interrogating geometries from CPACS using B-spline surfaces has been introduced.
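
The B-spline surface idea can be illustrated in a few lines. The paper's own implementation is not specified, so SciPy's `bisplrep`/`bisplev` stand in here, and the fitted surface is a synthetic placeholder rather than a real fuselage skin:

```python
# Rough sketch: fit a bicubic B-spline surface to gridded surface points,
# then interrogate it at arbitrary parameter values — the operation the
# new CPACS geometry description needs for model generation.
import numpy as np
from scipy.interpolate import bisplrep, bisplev

u, v = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
z = np.sin(u) * v                      # placeholder "skin" height field

# s=0 forces interpolation through all sample points
tck = bisplrep(u.ravel(), v.ravel(), z.ravel(), s=0)

# Evaluate the surface at an arbitrary parameter pair
print(bisplev(0.5, 0.5, tck))          # close to sin(0.5) * 0.5
```

Once the spline coefficients are computed, evaluation is cheap, which is consistent with the efficiency gains the Findings section reports.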

Findings

Several new modules have been implemented independently with a well-defined central data format in place for storing and exchanging information, resulting in a highly extensible framework for working with finite element data. The new geometry description proves to be highly efficient while also improving the geometric accuracy.

Practical implications

The newly implemented modules provide the groundwork for a new all-Python model generation chain, which is more flexible at significantly improved runtimes. With the analysis being part of a larger multidisciplinary design optimization process, this enables exploration of much larger design spaces within a given timeframe.

Originality/value

In the presented paper, key features of the newly developed model generation chain are introduced. They enable the quick generation of global finite element models from CPACS for arbitrary solvers for the first time.

Details

Aircraft Engineering and Aerospace Technology, vol. 89 no. 5
Type: Research Article
ISSN: 1748-8842


Article
Publication date: 21 March 2016

Mathew Miles

Many libraries need to develop their own data-driven web applications, but their technical staff often lack the required specialized training – which includes knowledge of…

Abstract

Purpose

Many libraries need to develop their own data-driven web applications, but their technical staff often lack the required specialized training, which includes knowledge of SQL, a web application language such as PHP, JavaScript, CSS and jQuery. The web2py framework greatly reduces the learning curve for creating data-driven websites by focusing on three main goals: ease of use, rapid development and security. web2py follows a strict MVC pattern in which the controllers and web templates are all written in pure Python; no additional templating language is required. The paper aims to discuss these issues.

Design/methodology/approach

There are many frameworks available for creating database-driven web applications. The author had used ColdFusion for many years but wanted to move to a more complete web framework which was also open source.

Findings

After evaluating a number of Python frameworks, web2py was found to provide the best combination of functionality and ease of use. This paper focusses on the strengths of web2py and not the specifics of evaluating the different frameworks.

Practical implications

Librarians who feel that they do not have the skills to create data-driven websites in other frameworks might find that they can develop them in web2py. It is a good web application framework to start with, which might also provide a gateway to other frameworks.

Originality/value

web2py is an open source framework that could have great benefit for those who may have struggled to create database-driven websites in other frameworks or languages.

Details

Library Hi Tech, vol. 34 no. 1
Type: Research Article
ISSN: 0737-8831


Article
Publication date: 13 September 2019

Collins Udanor and Chinatu C. Anyanwu

Hate speech in recent times has become a troubling development. It has different meanings to different people in different cultures. The anonymity and ubiquity of the social media…


Abstract

Purpose

Hate speech has in recent times become a troubling development. It has different meanings to different people in different cultures. The anonymity and ubiquity of social media provide a breeding ground for hate speech and make combating it seem like a lost battle. However, what may constitute hate speech in a culturally or religiously neutral society may not be perceived as such in a polarized multi-cultural and multi-religious society like Nigeria. Defining hate speech, therefore, may be contextual. Hate speech in Nigeria may be perceived along ethnic, religious and political boundaries. The purpose of this paper is to check for the presence of hate speech on social media platforms such as Twitter, and to determine to what degree it is present. It also intends to find out what monitoring mechanisms social media platforms such as Facebook and Twitter have put in place to combat hate speech. Lexalytics is a term coined by the authors from the words lexical analytics, for the purpose of opinion mining unstructured texts such as tweets.

Design/methodology/approach

This research developed Python software called the polarized opinions sentiment analyzer (POSA), adopting an ego social network analytics technique in which an individual’s behavior is mined and described. POSA uses a customized Python N-gram dictionary of local, context-based terms that may be considered hate terms. It then applies the Twitter API to stream tweets from popular and trending Nigerian Twitter handles in politics, ethnicity, religion, social activism, racism, etc., and filters the tweets against the custom dictionary, using unsupervised classification of the texts as either positive or negative sentiments. The outcome is visualized using tables, pie charts and word clouds. A similar implementation was also carried out in R-Studio; both sets of results are compared, and a t-test was applied to determine whether there was a significant difference between them. The research methodology can be classified as both qualitative and quantitative: qualitative in terms of data classification, and quantitative in terms of identifying the results as either negative or positive from the computation of text to vector.
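
The dictionary-filtering step described above can be sketched in miniature. This is an illustration only — the dictionary entries are neutral placeholders (the authors' actual N-gram dictionary is not published in the abstract), and matching is reduced here to single words rather than N-grams:

```python
# Toy version of POSA's filtering step: a tweet is labelled "negative"
# (potential hate content) if it contains any term from a custom
# dictionary, and "positive" otherwise — an unsupervised, lexicon-based
# classification, as the abstract describes.
hate_terms = {"badword1", "badword2"}   # hypothetical dictionary entries

def classify(tweet):
    """Label a tweet by intersecting its tokens with the dictionary."""
    tokens = set(tweet.lower().split())
    return "negative" if tokens & hate_terms else "positive"

tweets = ["this contains badword1 today", "a perfectly friendly message"]
print([classify(t) for t in tweets])  # ['negative', 'positive']
```

Extending the token set to bigrams and trigrams would recover the N-gram behavior, at the cost of the feature-sparsity issue the authors note in their limitations.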

Findings

The findings from two sets of experiments on POSA and R are as follows. In the first experiment, the POSA software found that the Twitter handles analyzed contained between 33 and 55 percent hate content, while the R results show hate content ranging from 38 to 62 percent. A t-test on both positive and negative scores for POSA and R-Studio reveals p-values of 0.389 and 0.289, respectively, at an α value of 0.05, implying that there is no significant difference between the results from POSA and R. In the second experiment, performed on 11 local handles with 1,207 tweets, the percentage of hate content classified by POSA is 40 percent, while that classified by R is 51 percent; the accuracy of hate speech classification predicted by POSA is 87 percent (86 percent for free speech), while the accuracy predicted by R is 65 percent (74 percent for free speech). This study reveals that neither Twitter nor Facebook has an automated monitoring system for hate speech, and no benchmark is set to decide the level of hate content allowed in a text. The monitoring is instead done by humans, whose assessment is usually subjective and sometimes inconsistent.

Research limitations/implications

This study establishes that hate speech is on the increase on social media. It also shows that hate mongers can actually be pinned down through the contents of their messages. The POSA system can be used as a plug-in by Twitter to detect and stop hate speech on its platform. The study was limited to public Twitter handles only. N-grams are effective features for word-sense disambiguation, but when using N-grams, the feature vector can take on enormous proportions, in turn increasing the sparsity of the feature vectors.

Practical implications

The findings of this study show that if urgent measures are not taken to combat hate speech, there could be dire consequences, especially in highly polarized societies that are continually heated along religious and ethnic lines. On a daily basis, tempers flare on social media over comments made by participants. This study has also demonstrated that it is possible to implement a technology that can track and terminate hate speech on a micro-blog like Twitter. This can also be extended to other social media platforms.

Social implications

This study will help to promote a more positive society, ensuring the social media is positively utilized to the benefit of mankind.

Originality/value

The findings can be used by social media companies to monitor user behaviors, and pin hate crimes to specific persons. Governments and law enforcement bodies can also use the POSA application to track down hate peddlers.

Details

Data Technologies and Applications, vol. 53 no. 4
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 1 February 2022

Yasser Mater, Mohamed Kamel, Ahmed Karam and Emad Bakhoum

Utilization of sustainable materials is a global demand in the construction industry. Hence, this study aims to integrate waste management and artificial intelligence by…

Abstract

Purpose

Utilization of sustainable materials is a global demand in the construction industry. Hence, this study aims to integrate waste management and artificial intelligence by developing an artificial neural network (ANN) model to predict the compressive strength of green concrete. The proposed model allows the use of recycled coarse aggregate (RCA), recycled fine aggregate (RFA) and fly ash (FA) as partial replacements of concrete constituents.

Design/methodology/approach

The model is constructed, trained and validated in Python using a set of experimental data collected from the literature. The model’s architecture comprises an input layer of seven neurons representing the concrete constituents and an output layer of two neurons representing the 7- and 28-day compressive strengths. The model showed high performance across multiple metrics, including a mean squared error (MSE) of 2.41 and 2.00 for the training and testing data sets, respectively.
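
The network shape described above (seven inputs, two outputs) is easy to sketch. The abstract does not name the library used, so scikit-learn's `MLPRegressor` serves purely as a stand-in, and the training data below are random placeholders, not the experimental data from the literature:

```python
# Hedged sketch of an ANN with 7 input features (mix constituents such as
# cement, water, RCA, RFA, FA, ...) and 2 outputs (7- and 28-day
# compressive strength, MPa). Architecture only — not the authors' model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((50, 7))            # placeholder constituent proportions
y = rng.random((50, 2)) * 40       # placeholder strengths in the 0-40 MPa range

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                     random_state=0).fit(X, y)
pred = model.predict(X[:3])
print(pred.shape)  # (3, 2): one 7-day and one 28-day prediction per mix
```

With real experimental data, the MSE values reported in the abstract would be computed on held-out mixes via `mean_squared_error`.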

Findings

Results showed that cement replacement with 10% FA causes a slight reduction of up to 9% in the compressive strength, especially at early ages. Moreover, a decrease of nearly 40% in the 28-day compressive strength was noticed when replacing fine aggregate with 25% RFA.

Research limitations/implications

The research is limited to green concrete of normal compressive strength, in the range of 25 to 40 MPa.

Practical implications

The developed model is designed in a flexible and user-friendly manner to be able to contribute to the sustainable development of the construction industry by saving time, effort and cost consumed in the experimental testing of materials.

Social implications

Green concrete containing wastes can solve several environmental problems, such as waste disposal problems, depletion of natural resources and energy consumption.

Originality/value

This research proposes a machine learning prediction model using the Python programming language to estimate the compressive strength of a green concrete mix that includes construction and demolition waste and FA. The ANN model is used to create three guidance charts through a parametric study to obtain the compressive strength of green concrete using RCA, RFA and FA replacements.

Details

Construction Innovation, vol. 23 no. 2
Type: Research Article
ISSN: 1471-4175


Article
Publication date: 16 February 2015

Jodi Kearns


Abstract

Details

Reference Reviews, vol. 29 no. 2
Type: Research Article
ISSN: 0950-4125


Article
Publication date: 7 August 2017

Sathyavikasini Kalimuthu and Vijaya Vijayakumar

Diagnosing a genetic neuromuscular disorder such as muscular dystrophy is complicated when the imperfection occurs during splicing. This paper aims to predict the type of muscular…

Abstract

Purpose

Diagnosing a genetic neuromuscular disorder such as muscular dystrophy is complicated when the imperfection occurs during splicing. This paper aims to predict the type of muscular dystrophy from gene sequences by extracting well-defined descriptors related to splicing mutations. An automatic model is built to classify the disease through pattern recognition techniques coded in Python using the scikit-learn framework.

Design/methodology/approach

In this paper, cloned gene sequences are synthesized based on the mutation position and its location on the chromosome using the positional cloning approach. For instance, in the Human Gene Mutation Database (HGMD), the mutational information for a splicing mutation specified as IVS1-5 T > G indicates (IVS: intervening sequence, or intron) the first intron and five nucleotides before the consensus intron site AG, where the nucleotide G is altered to T. IVS (+ve) denotes the forward strand 3′, with positive numbers counted from the G of the donor site invariant, and IVS (−ve) denotes the backward strand 5′, with negative numbers starting from the G of the acceptor site. The key idea in this paper is to spot discriminative descriptors from diseased gene sequences based on splicing variants and to provide an effective machine learning solution for predicting the type of muscular dystrophy disease with splicing mutations. Multi-class classification is worked out through data modeling of gene sequences. Synthetic mutational gene sequences are created, as diseased gene sequences are not readily obtainable for this intricate disease; the positional cloning approach supports generating disease gene sequences based on mutational information acquired from HGMD. SNP-, gene- and exon-based discriminative features are identified and used to train the model. A muscular dystrophy disease prediction model is built using supervised learning techniques in the scikit-learn environment. The data frame is built with the extracted features as a NumPy array. The data are normalized by transforming the feature values into the range between 0 and 1, which aids in scaling the input attributes for the model. Naïve Bayes, decision tree, K-nearest neighbor and SVM models are trained using the Python library framework in scikit-learn.
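
The modelling workflow described above — features scaled to [0, 1], then four classifiers compared with tenfold cross-validation — can be sketched directly in scikit-learn. The data below are synthetic stand-ins for the SNP-, gene- and exon-based descriptors, and the hyperparameters are defaults, not the authors' tuned values:

```python
# Sketch of the scikit-learn workflow: MinMaxScaler normalization into
# [0, 1], then Naive Bayes / decision tree / k-NN / SVM models evaluated
# with tenfold cross-validation.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((120, 6))             # placeholder descriptor vectors
y = rng.integers(0, 3, size=120)     # placeholder dystrophy subtype labels

models = {
    "naive_bayes": GaussianNB(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "knn": KNeighborsClassifier(),
    "svm": SVC(C=1.0, gamma="scale"),   # cost/gamma would be tuned
}
for name, clf in models.items():
    pipe = make_pipeline(MinMaxScaler(), clf)       # scale, then classify
    scores = cross_val_score(pipe, X, y, cv=10)     # tenfold CV
    print(name, round(scores.mean(), 3))
```

Putting the scaler inside the pipeline ensures normalization is fitted only on each training fold, avoiding leakage into the validation folds.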

Findings

To the best of the authors’ knowledge, this is the foremost pattern recognition model to classify muscular dystrophy disease pertaining to splicing mutations. Essential SNP-, gene- and exon-based descriptors related to splicing mutations are proposed and extracted from the cloned gene sequences. The model is built using statistical learning techniques through scikit-learn in the Anaconda framework. This paper also deliberates on the results of statistical learning carried out with the same set of gene sequences using synonymous and non-synonymous mutational descriptors.

Research limitations/implications

The data frame is built with a NumPy array. Normalizing the data by transforming the feature values into the range between 0 and 1 aids in scaling the input attributes for the model. Naïve Bayes, decision tree, K-nearest neighbor and SVM models are developed using the Python library framework in scikit-learn. While learning the SVM model, the cost, gamma and kernel parameters are tuned to attain good results. Scoring parameters of the classifiers are evaluated with tenfold cross-validation using the metric functions of the scikit-learn library. Results of the disease identification model based on non-synonymous, synonymous and splicing mutations were analyzed.

Practical implications

Essential SNP-, gene- and exon-based descriptors related to splicing mutations are proposed and extracted from the cloned gene sequences. The model is built using statistical learning techniques through scikit-learn in the Anaconda framework. The performance of the classifiers is improved by using different estimators from the scikit-learn library. Several types of mutations, such as missense, nonsense and silent mutations, are also considered to build models through statistical learning techniques, and their results are analyzed.

Originality/value

To the best of the authors’ knowledge, this is the foremost pattern recognition model to classify muscular dystrophy disease pertaining to splicing mutations.

Details

World Journal of Engineering, vol. 14 no. 4
Type: Research Article
ISSN: 1708-5284


Article
Publication date: 21 March 2019

Filipe Monteiro Ribeiro, J. Norberto Pires and Amin S. Azar

Additive manufacturing (AM) technologies have recently turned into a mainstream production method in many industries. The adoption of new manufacturing scenarios led to the…


Abstract

Purpose

Additive manufacturing (AM) technologies have recently turned into a mainstream production method in many industries. The adoption of new manufacturing scenarios led to the necessity of cross-disciplinary developments by combining several fields such as materials, robotics and computer programming. This paper aims to describe an innovative solution for implementing robotic simulation for AM experiments using a robot cell, which is controlled through a system control application (SCA).

Design/methodology/approach

For this purpose, the emulation of the AM tasks was executed by creating a robot working station in the RoboDK software, which is responsible for the automatic administration of additive tasks. This is done by interpreting g-code from the Slic3r software environment. Subsequently, the SCA and the relevant graphical user interface (GUI) were developed in Python to control the AM tasks from the RoboDK software environment. As an extra feature, Slic3r was embedded in the SCA to enable g-code to be generated automatically, without using the software’s original user interface. In sum, this paper adds new insight to the field of AM, as it demonstrates the possibility of simulating and controlling AM tasks in a robot station.
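
The g-code interpretation step mentioned above can be sketched generically. The snippet below only extracts XYZ targets from slicer output and collects them as would-be robot waypoints; the actual RoboDK calls are omitted, and the g-code lines are illustrative, not Slic3r output from the paper:

```python
# Minimal g-code interpreter: walk G0/G1 motion commands, track the
# current position, and emit one waypoint per move. A real pipeline
# would forward each waypoint to the robot station via the RoboDK API.
def gcode_to_waypoints(lines):
    pos = {"X": 0.0, "Y": 0.0, "Z": 0.0}
    waypoints = []
    for line in lines:
        parts = line.split(";")[0].split()   # strip comments, tokenize
        if parts and parts[0] in ("G0", "G1"):
            for word in parts[1:]:
                if word[0] in pos:           # axis words like X10, Z0.3
                    pos[word[0]] = float(word[1:])
            waypoints.append((pos["X"], pos["Y"], pos["Z"]))
    return waypoints

gcode = ["G1 X10 Y0 Z0.3 ; first move", "G1 X10 Y20", "M104 S200"]
print(gcode_to_waypoints(gcode))  # [(10.0, 0.0, 0.3), (10.0, 20.0, 0.3)]
```

Non-motion commands (such as the temperature line `M104`) are skipped here; a full interpreter would map them to process parameters of the AM cell.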

Findings

This paper contributes to the AM field by introducing and implementing an SCA capable of executing/simulating robotic AM tasks. It also shows how an advanced user can integrate advanced simulation technologies with a real AM system, creating a powerful system for R&D and operational manufacturing tasks. As demonstrated, the creation of the AM environment was only possible by using the RoboDK software, which allows the creation of a robot working station and its main operations.

Originality/value

Although the AM simulation was satisfactory, it was necessary to develop an SCA capable of controlling the whole simulation through simple commands issued by users. As described in this work, the SCA was implemented entirely in Python using official libraries. The solution is presented in the form of an application capable of controlling the AM operation through a server/client socket connection. In summary, a system architecture capable of controlling an AM simulation was presented. Moreover, the implementation of commands in a simple GUI is a step forward in the implementation of modern AM process controls.

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 1
Type: Research Article
ISSN: 0143-991X


Open Access
Article
Publication date: 28 September 2023

Jonas Bundschuh, M. Greta Ruppert and Yvonne Späck-Leigsnering

The purpose of this paper is to present the freely available finite element simulation software Pyrit.

Abstract

Purpose

The purpose of this paper is to present the freely available finite element simulation software Pyrit.

Design/methodology/approach

In a first step, the design principles and the objective of the software project are defined. Then, the software’s structure is established: The software is organized in packages for which an overview is given. The structure is based on the typical steps of a simulation workflow, i.e., problem definition, problem-solving and post-processing. State-of-the-art software engineering principles are applied to ensure a high code quality at all times. Finally, the modeling and simulation workflow of Pyrit is demonstrated by three examples.

Findings

Pyrit is a field simulation software based on the finite element method written in Python to solve coupled systems of partial differential equations. It is designed as a modular software that is easily modifiable and extendable. The framework can, therefore, be adapted to various activities, i.e., research, education and industry collaboration.

Research limitations/implications

The focus of Pyrit is static and quasistatic electromagnetic problems as well as (coupled) heat conduction problems. It allows for both time-domain and frequency-domain simulations.

Originality/value

In research, problem-specific modifications and direct access to the source code of simulation tools are essential. With Pyrit, the authors present a computationally efficient and platform-independent simulation software for various electromagnetic and thermal field problems.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 42 no. 5
Type: Research Article
ISSN: 0332-1649

