Papers by Rosziati Ibrahim

Evolution in Electrical and Electronic Engineering, May 19, 2021
In this research, the main objective is to develop a system that can detect paddy leaf diseases, namely Brown Spot Disease (BS) and Narrow Brown Spot Disease (NBS). The idea of this paper is to develop a technique capable of examining the image of a plant leaf using a color slicing technique and classifying the type of paddy leaf disease. Early detection of paddy leaf disease avoids the production of low-quality rice and is also important to ensure a high quality of the paddy plant. The methodology involves image acquisition, pre-processing, thresholding, edge detection, color slicing, masking and analysis of the paddy leaf disease. All the paddy samples go through an RGB calculation and are processed with the color slicing technique for paddy disease classification. Out of 37 sample paddy leaf images used, 33 of them (89%) correctly detect the desired disease.
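
A minimal sketch of the color slicing step described above, using OpenCV and NumPy. The RGB ranges, file name and lesion-ratio measure are hypothetical placeholders; the paper's actual thresholds, pre-processing and masking steps are not reproduced here.

```python
# Minimal sketch of colour slicing for lesion detection, assuming hypothetical
# RGB ranges for brown-spot-like lesions.
import cv2
import numpy as np

def slice_lesion_pixels(image_path, lower_rgb=(60, 30, 10), upper_rgb=(160, 110, 80)):
    """Return a binary mask of pixels whose RGB values fall inside the slice."""
    bgr = cv2.imread(image_path)                      # OpenCV loads images as BGR
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    lower = np.array(lower_rgb, dtype=np.uint8)
    upper = np.array(upper_rgb, dtype=np.uint8)
    return cv2.inRange(rgb, lower, upper)             # the colour slicing step

def lesion_ratio(mask):
    """Fraction of pixels flagged as lesion, a crude severity indicator."""
    return float(np.count_nonzero(mask)) / mask.size

if __name__ == "__main__":
    mask = slice_lesion_pixels("paddy_leaf.jpg")      # hypothetical sample image
    print(f"lesion pixel ratio: {lesion_ratio(mask):.3f}")
```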

In this paper, a new algorithm using wavelet properties to compress an image is proposed. This algorithm concerns reducing the wavelet coefficients produced by the Discrete Wavelet Transform (DWT) process. The proposed algorithm starts by calculating the threshold value using the proposed threshold value estimator at the wavelet detail subbands (diagonal, vertical and horizontal). The proposed algorithm estimates a suitable threshold value for each individual subband. The calculated threshold values are then applied to their respective subbands. Coefficients with values lower than the calculated threshold are discarded, while the rest are retained. The novelty of the proposed method is that it uses the principle of the standard deviation method in deriving the threshold value estimator equation. Experiments show that the proposed method can effectively remove a large number of unnecessary wavelet coefficients with a higher Peak Signal to Noise Ratio (PSNR) and compression ratio, as well as a shorter elapsed time.
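
The abstract does not give the exact threshold estimator equation, so the sketch below uses a plain standard-deviation-scaled threshold per detail subband as an illustrative stand-in, with PyWavelets for the DWT. The wavelet, decomposition level and scaling factor k are assumptions.

```python
# Illustrative sketch of per-subband thresholding of DWT coefficients.
# A simple k * std(subband) rule stands in for the paper's own estimator.
import numpy as np
import pywt

def compress_coefficients(image, wavelet="db3", level=2, k=1.0):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    out = [coeffs[0]]                                  # keep the approximation subband
    for (cH, cV, cD) in coeffs[1:]:
        new_details = []
        for band in (cH, cV, cD):
            t = k * np.std(band)                       # per-subband threshold
            new_details.append(np.where(np.abs(band) < t, 0.0, band))
        out.append(tuple(new_details))
    return out

def reconstruct(coeffs, wavelet="db3"):
    return pywt.waverec2(coeffs, wavelet)

if __name__ == "__main__":
    img = np.random.rand(128, 128)                     # stand-in for a test image
    thresholded = compress_coefficients(img)
    rec = reconstruct(thresholded)
    kept = sum(np.count_nonzero(b) for lvl in thresholded[1:] for b in lvl)
    print("non-zero detail coefficients kept:", kept)
```
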
Malaysia University Conference Engineering Technology, Oct 11, 2014
Thresholding is a process of shrinking small absolute coefficient values while retaining large absolute coefficient values, which produces a finer reconstructed signal. Since this method assumes that the amplitudes of the wavelet transform coefficients of the signal are much larger than those of the noise, the unwanted noise is removed while the significant signal is retained. This paper examines several thresholding methods, namely VisuShrink (hard threshold), VisuShrink (soft threshold), BayesShrink, OTW SURE-LET and NeighShrink SURE. These five methods are implemented on standard test images and medical images to observe their different performance based on the Peak Signal-to-Noise Ratio (PSNR) value.
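
A short sketch of the hard versus soft shrinkage rules being compared, using the VisuShrink universal threshold and a PSNR check on a synthetic image. BayesShrink, OTW SURE-LET and NeighShrink SURE are not implemented; the noise-estimation rule, wavelet and level are assumptions.

```python
# Sketch of VisuShrink-style hard vs soft thresholding with PSNR evaluation.
import numpy as np
import pywt

def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def visushrink_denoise(noisy, wavelet="db3", level=2, mode="soft"):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # Noise estimate from the finest diagonal subband (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(noisy.size))        # universal threshold
    denoised = [coeffs[0]]
    for detail in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(b, t, mode=mode) for b in detail))
    return pywt.waverec2(denoised, wavelet)

if __name__ == "__main__":
    clean = np.tile(np.linspace(0, 255, 128), (128, 1))  # synthetic test image
    noisy = clean + np.random.normal(0, 20, clean.shape)
    for mode in ("hard", "soft"):
        restored = visushrink_denoise(noisy, mode=mode)
        print(mode, "threshold PSNR:", round(psnr(clean, restored), 2), "dB")
```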

International Journal of Advanced Computer Science and Applications, 2021
The techniques associated with Test Case Prioritization (TCP) are used to reduce the cost of regression testing and to achieve the objective that modifications to the target code do not impact the functionality of the updated software. The effectiveness of TCP is measured based on cost, code coverage, and fault detection ability. The regression testing techniques proposed so far focus on one or two effectiveness parameters. In this paper, we present a detailed state-of-the-art review of the approaches used in regression testing. The second objective is to combine these effectiveness measures into a single- or multi-objective TCP task. This systematic literature review was conducted to identify the state-of-the-art research in regression TCP from 2007 to 2020. The review identifies fifty-two (52) relevant studies that focused on these three selection parameters to justify their findings. The results reveal six families of regression TCP, in which meta-heuristic regression TCP was reported in 38% of studies and generic regression TCP techniques in 31%. The parameters used as prioritization criteria were cost, code coverage, and fault detection ability: code coverage is reported in 38% of studies, cost in 17%, and cost and code coverage together in 31%. Three sources of datasets were identified, namely the Software-artifact Infrastructure Repository (SIR), the Apache Software Foundation, and GitHub. The measurements and metrics used to validate effectiveness are inclusiveness, precision, recall, and retest-all.

Scientific Programming, Jun 24, 2021
Modified source code is validated by regression testing. In regression testing, time and resources are limited, so a minimal set of test cases has to be selected from the test suites to reduce execution time. The test case minimization process deals with the optimization of regression testing by removing redundant test cases or prioritizing the test cases. This study proposed a test case prioritization approach based on multi-objective particle swarm optimization (MOPSO) by considering minimum execution time, maximum fault detection ability, and maximum code coverage. The MOPSO algorithm is used for the prioritization of test cases with parameters including execution time, fault detection ability, and code coverage. Three datasets are selected to evaluate the proposed MOPSO technique: TreeDataStructure, JodaTime, and Triangle. The proposed MOPSO is compared with the no ordering, reverse ordering, and random ordering techniques to evaluate its effectiveness. Higher values represent greater effectiveness and efficiency of the proposed MOPSO compared to the other approaches on the TreeDataStructure, JodaTime, and Triangle datasets. The results are presented in 100-index mode, ordered from low to high values, after which the test cases are prioritized. The experiment is conducted on three open-source Java applications and evaluated using the metrics inclusiveness, precision, and size reduction of the test suite matrix. The results revealed that all scenarios performed acceptably, and the technique is 17% to 86% more effective in terms of inclusiveness, 33% to 85% more effective in terms of precision, and achieves 17% minimum to 86% maximum size reduction of the metrics.
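
As a hedged illustration of the three prioritization objectives named above, the sketch below scores an ordered test suite by execution time, faults detected and code coverage. The TestCase fields and sample values are hypothetical, and the full MOPSO search (particles over orderings, Pareto archive, velocity updates) is deliberately omitted.

```python
# Sketch of the three fitness signals for an ordered test suite.
from dataclasses import dataclass

@dataclass
class TestCase:                      # hypothetical per-test measurements
    name: str
    exec_time: float                 # seconds
    faults: set                      # fault ids this test exposes
    lines: set                       # code lines this test covers

def objectives(order, total_lines):
    """Return (total time, faults found, fraction of lines covered) for an order."""
    time = sum(t.exec_time for t in order)
    faults = set().union(*(t.faults for t in order)) if order else set()
    covered = set().union(*(t.lines for t in order)) if order else set()
    return time, len(faults), len(covered) / total_lines

suite = [
    TestCase("t1", 0.4, {1}, {10, 11, 12}),
    TestCase("t2", 1.2, {1, 2}, {10, 13}),
    TestCase("t3", 0.2, set(), {14}),
]
print(objectives(suite, total_lines=5))
```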

Computers, Materials & Continua, 2022
Plant disease classification based on digital pictures is challenging. Machine learning approaches and plant image categorization technologies such as deep learning have been utilized to recognize, identify, and diagnose plant diseases in the previous decade. Increasing the yield quantity and quality of rice farming is important for paddy-producing countries. However, some diseases that block improvement in paddy production are considered an ominous threat. The Convolutional Neural Network (CNN) has shown remarkable performance in the early detection of paddy leaf diseases from images in the fast-growing era of science and technology. Nevertheless, constructing effective CNN architectures depends on expertise in neural networks and domain knowledge. This approach is time-consuming, and high computational resources are mandatory. In this research, we propose a novel method based on the Mutant Particle Swarm Optimization (MUT-PSO) algorithm to search for an optimum CNN architecture for paddy leaf disease classification. Experimental results show that the Mutant Particle Swarm Optimization Convolutional Neural Network (MUTPSO-CNN) can find an optimum CNN architecture that offers better performance than existing hand-crafted CNN architectures in terms of accuracy, precision/recall, and execution time.
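
The sketch below only illustrates the general idea of encoding a candidate CNN architecture as a particle and perturbing it, using Keras to decode the particle into a model. The gene layout, value bounds and mutation rule are assumptions; the paper's actual MUT-PSO encoding, fitness evaluation (training each decoded CNN) and swarm update are not shown.

```python
# Sketch of encoding a CNN architecture as a particle and mutating one gene.
import random
import tensorflow as tf
from tensorflow.keras import layers

def decode(particle, input_shape=(64, 64, 3), n_classes=4):
    """particle = [n_conv_blocks, base_filters, kernel_size, dense_units]"""
    n_blocks, base_filters, kernel, dense_units = particle
    model = tf.keras.Sequential([tf.keras.Input(shape=input_shape)])
    for i in range(n_blocks):
        model.add(layers.Conv2D(base_filters * (2 ** i), kernel,
                                activation="relu", padding="same"))
        model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    model.add(layers.Dense(dense_units, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    return model

def mutate(particle):
    """Randomly perturb one architecture gene (a simplified 'mutant' step)."""
    p = list(particle)
    bounds = [(1, 4), (8, 64), (3, 5), (32, 256)]      # assumed search ranges
    i = random.randrange(len(p))
    p[i] = random.randint(*bounds[i])
    return p

candidate = [2, 16, 3, 64]
print(decode(candidate).count_params(), "parameters before mutation")
print("mutated particle:", mutate(candidate))
```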

International Journal of Advanced Computer Science and Applications, 2020
Improving the quality and quantity of paddy production is very important since rice is the most consumed staple food for billions of people around the world. Early detection of paddy diseases and pests at different stages of growth is crucial in paddy production. However, the current manual method of detecting and classifying paddy diseases and pests requires a very knowledgeable farmer and is time-consuming. Thus, this study attempts to utilize effective image processing and machine learning techniques to detect and classify paddy diseases and pests more accurately and with less processing time. To accomplish this study, 3355 paddy images comprising 4 classes, which are healthy, brown spot, leaf blast, and hispa, were used. The proposed five-layer CNN technique is then used to classify the images. The results show that the proposed CNN technique outperforms other state-of-the-art comparative models and achieves an accuracy rate of up to 93%.
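
A plausible small Keras CNN for the four paddy classes is sketched below. The abstract does not specify the exact five-layer configuration, input size or training settings, so the layer sizes here are assumptions rather than the paper's architecture.

```python
# Hypothetical small CNN for the classes: healthy, brown spot, leaf blast, hispa.
import tensorflow as tf
from tensorflow.keras import layers

def build_paddy_cnn(input_shape=(128, 128, 3), n_classes=4):
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),   # convolutional block 1
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),   # convolutional block 2
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),  # convolutional block 3
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),      # fully connected layer
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_paddy_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```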

Journal of Telecommunication, Electronic and Computer Engineering, 2017
Over recent years, the rapid growth of smartphone technology and capabilities has made the smartphone an important tool in our daily activities. Despite increasing processing power and capabilities as well as decreasing prices, these consumer smartphones are still limited in terms of battery capacity. The heterogeneity of these devices, their subscribed networks and their users also leads to mismatch problems. For power-hungry multimedia applications such as streaming video players and 3D games, the limited battery capacity motivates smartphone energy-aware content adaptation research to address these problems. This paper presents experiments on the energy consumption of video streaming under various video encoding properties as well as different network scenarios. The results of the experiments show that energy savings of up to 40% can be achieved by using different encoding properties.
Journal of Telecommunication, Electronic and Computer Engineering, 2018
With the proliferation of readily available image content, image compression has become a topic of considerable importance. As the demand for digital imaging rapidly increases, the storage capacity aspect should be considered. Image compression therefore refers to reducing the size of an image to minimize storage without harming image quality. Thus, an appropriate image compression technique is needed to save capacity without losing valuable information. This paper consolidates literature focused on image compression, thresholding algorithms and quantization algorithms. Related research in these areas is then presented.

Rice Husk Ash (RHA) as a sand replacement and filler in Foamed Concrete (FC) has contributed to increased strength. FC with RHA has increased the slab's resistance to impact loading. When RHA granulate fills the FC pores, it delays the collapse of the porous cells by increasing the strain of the pore walls. The RHA granulate increased the elasticity of the pore walls of the FC. In addition, the pore walls become more plastic when subjected to the compressive stress generated by impact loading. The impact test was conducted on an instrumented drop-weight impact tower to generate various impact velocities of a non-deformable impactor on slabs of FC and FC with RHA. Results show that FC with RHA created a crater without fragments, while FC clearly created radial cracks and fragments within the crater field. However, both slab materials did not generate spalling or scabbing upon impact, and the influence of porosity produces only local damage due to the mechanism of brittle...

International Journal of Advanced Computer Science and Applications, 2020
Plants are exposed to many attacks from various micro-organisms, bacterial diseases and pests. The symptoms of the attacks are usually distinguished through inspection of the leaves, stem or fruit. Diseases that commonly attack plants are Powdery Mildew and Leaf Blight, and they may cause severe damage if not controlled in the early stages. Image processing has been widely used for identification, detection, grading and quality inspection in the agriculture field. Detecting and identifying the disease of a plant is very important, especially in producing high-quality fruit. The leaves of a plant can be used to determine the health status of that plant. The objective of this work is to develop a system capable of detecting and identifying the type of disease based on blob detection and statistical analysis. A total of 45 sample leaf images of different colours and types were used, and the accuracy was analysed. The blob detection technique is used to detect the healthiness of the plant leaves, while the statistical analysis identifies the type of disease by calculating the standard deviation and mean values. The results were compared with manual inspection, and it was found that the system has 86% accuracy compared to the manual detection process.
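
A hedged sketch of the two stages described above using OpenCV: blob detection to flag lesion spots, then simple mean and standard deviation statistics of the leaf pixels. The blob parameters, channel choice, file name and the healthy/diseased decision are illustrative placeholders, not the paper's calibrated values.

```python
# Blob detection plus mean/std statistics for a leaf image.
import cv2
import numpy as np

def detect_blobs(gray):
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 30                       # ignore tiny speckles (assumed value)
    detector = cv2.SimpleBlobDetector_create(params)
    return detector.detect(gray)

def leaf_statistics(bgr):
    green = bgr[:, :, 1].astype(float)        # green channel of the leaf
    return float(np.mean(green)), float(np.std(green))

if __name__ == "__main__":
    image = cv2.imread("leaf_sample.jpg")     # hypothetical sample image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blobs = detect_blobs(gray)
    mean_g, std_g = leaf_statistics(image)
    healthy = len(blobs) == 0                 # crude placeholder decision rule
    print(f"blobs={len(blobs)} mean={mean_g:.1f} std={std_g:.1f} healthy={healthy}")
```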

Asia-Pacific Journal of Information Technology & Multimedia, 2018
With the increasing demand for digital images, there is a need to compress images to accommodate limited bandwidth and storage capacity. Recently, there has been growing interest among researchers in the compression of various types of images and data. Amongst the various compression algorithms, transform-based compression is one of the most promising. Despite technological advances in transmission and storage, the demands placed on communication bandwidth and storage capacity by far outstrip their availability. This paper presents a review of image compression principles, compression techniques, various thresholding algorithms (pre-processing algorithms) and quantization algorithms (post-processing algorithms). This paper intends to give the relevant parties an overview to help them choose image compression algorithms suited to their needs.
MATEC Web of Conferences, 2018
This paper investigates the existing practices and prospects of medical data classification based on data mining techniques. It highlights the major advanced classification approaches used to enhance classification accuracy. Past research has provided literature on medical data classification using data mining techniques. From an extensive literature analysis, it is found that data mining techniques are very effective for the task of classification. This paper comparatively analyses the current advancement in the classification of medical data. The findings of the study show that the existing classification of medical data can be improved further. Nonetheless, more research is needed to ascertain and lessen the ambiguities of classification in order to gain better precision.

Advances in Intelligent Systems and Computing, 2016
Recent advances in the field of image processing have revealed that the level of noise in mammogram images highly affects image quality and the classification performance of classifiers. Numerous data mining techniques have been developed to achieve high efficiency and effectiveness for computer-aided diagnosis systems; however, fuzzy soft set theory has barely been experimented with for medical images. Thus, this study proposes a classifier based on fuzzy soft sets with embedded wavelet de-noising filters. The proposed methodology involves five steps, namely: the MIAS dataset, wavelet de-noising filters with hard and soft thresholds, region of interest identification, feature extraction and classification. The feasibility of fuzzy soft sets for the classification of mammogram images is thereby scrutinized. Experimental results show that the proposed classifier FussCyier provides classification performance with Daub3 (Level 1) of 75.64% accuracy (hard threshold), 46.11% precision, 84.67% recall and 60% F-Micro. Thus, the results provide an alternative technique to categorize mammogram images.

Advances in Multimedia, 2013
The availability of heterogeneous devices has rapidly changed the way people access the World Wide Web, which includes rich content applications such as video streaming, 3D games, video conferencing, and mobile TV. However, most of these devices (i.e., mobile phones, PDAs, smartphones, and tablets) differ in capability in terms of built-in software and libraries (what they can display), display size (how the content appears), and battery supply (how long the content can be displayed). In order for digital content to fit the target device, content adaptation is required. There have been many projects focused on energy-aware content adaptation, designed with different goals and approaches. This paper reviews some of the representative content adaptation solutions proposed during the last few years in relation to energy consumption, focusing on wireless multimedia streaming on mobile devices. This paper also categorizes the research work according t...

International Journal of Software Engineering and Its Applications, 2015
Digital mammograms are coupled with noise, which makes de-noising a challenging problem. In the literature, a few wavelets such as Daubechies db3 and Haar have been used for de-noising medical images. However, wavelet filters such as sym8, Daubechies db4 and coif1 at certain levels of soft and hard thresholding have not been taken into account for mammogram images. Therefore, in this study five wavelet filters, namely Haar, sym8, Daubechies db3, db4 and coif1, at certain levels of soft and hard thresholding are considered. The peak signal-to-noise ratio and mean squared error values are then calculated. From the obtained results, it can be concluded that db3 (46.44656 dB for the hard threshold and 43.80779 dB for the soft threshold) is the more appropriate filter for de-noising mammogram images when compared with the other wavelet filters.
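
A compact sketch of the comparison described above: loop over the five wavelet filters with soft and hard thresholding and report PSNR and MSE, here on a synthetic image rather than real mammograms. The decomposition level and threshold rule are assumptions; the study's exact settings and data are not reproduced.

```python
# Compare wavelet filters for de-noising under soft and hard thresholding.
import numpy as np
import pywt
from skimage.metrics import peak_signal_noise_ratio, mean_squared_error

def denoise(noisy, wavelet, mode, level=1):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate
    t = sigma * np.sqrt(2.0 * np.log(noisy.size))
    cleaned = [coeffs[0]] + [tuple(pywt.threshold(b, t, mode=mode) for b in d)
                             for d in coeffs[1:]]
    return pywt.waverec2(cleaned, wavelet)

clean = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))    # stand-in for a mammogram
noisy = clean + np.random.normal(0.0, 0.05, clean.shape)
for wavelet in ("haar", "sym8", "db3", "db4", "coif1"):
    for mode in ("hard", "soft"):
        restored = denoise(noisy, wavelet, mode)[:clean.shape[0], :clean.shape[1]]
        print(wavelet, mode,
              "PSNR:", round(peak_signal_noise_ratio(clean, restored, data_range=1.0), 2),
              "MSE:", round(mean_squared_error(clean, restored), 5))
```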

International Journal of Engineering Trends and Technology, Jul 25, 2020
Software test cases can be defined as a set of conditions under which a tester tests and determines whether the System Under Test (SUT) satisfies the expected result correctly. This paper discusses an optimization technique for generating test cases automatically using EpiT (Eclipse Plug-in Tool). EpiT is developed to optimize the generation of test cases from source code in order to reduce the time spent on the conventional manual creation of test cases. Using its code smell functionality, EpiT helps to generate test cases automatically from Java programs by checking their lines of code (LOC). The implementation of EpiT is also presented through several case studies conducted to show the optimization of the generated test cases. Based on the results presented, EpiT is proven to solve the problem of software testers generating test cases manually and to check the optimization of the source code using the code smell technique.
arXiv (Cornell University), Dec 13, 2011
In this paper, the authors propose a new algorithm to hide data inside an image using a steganography technique. The proposed algorithm uses binary codes and pixels inside an image. The file is zipped before it is converted to binary codes to maximize the amount of data stored inside the image. By applying the proposed algorithm, a system called the Steganography Imaging System (SIS) is developed. The system is then tested to assess the viability of the proposed algorithm. Various sizes of data are stored inside the images, and the PSNR (peak signal-to-noise ratio) is captured for each of the tested images. Based on the PSNR value of each image, the stego image has a higher PSNR value. Hence, this new steganography algorithm is very efficient at hiding data inside the image.
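
The abstract does not describe the exact pixel mapping used by SIS, so the sketch below uses least-significant-bit embedding as a common stand-in for hiding a byte payload (for example, a zipped file read as bytes) in image pixels, with a PSNR check on the resulting stego image.

```python
# LSB embedding sketch; not the SIS algorithm itself, just an illustrative stand-in.
import numpy as np

def embed(cover: np.ndarray, payload: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(cover.shape)

def extract(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in cover image
secret = b"compressed payload"                               # e.g. zipped file bytes
stego = embed(cover, secret)
assert extract(stego, len(secret)) == secret
mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
print("PSNR of stego image:", round(10 * np.log10(255 ** 2 / mse), 2), "dB")
```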

International Journal of Advanced Computer Science and Applications
Currently, software complexity and size have been steadily growing, while the variety of testing has also increased. The quality of software testing must be improved to meet deadlines and reduce development and testing costs. Testing software manually is time-consuming, while automation saves time and money as well as increasing test coverage and accuracy. Over the last several years, many approaches to automate test case creation have been proposed. Model-based testing (MBT) is a test design technique that supports the automation of software testing processes by generating test artefacts based on a system model that represents the behavioural aspects of the system under test (SUT). The optimization technique for automatically generating test cases using Sena TLS-Parser is discussed in this paper. Sena TLS-Parser is developed as a plug-in tool to generate test cases automatically and reduce the time spent manually creating test cases. The process of generating test cases automatically with Sena TLS-Parser is presented through several case studies. Experimental results on six publicly available Java applications show that the proposed framework for Sena TLS-Parser outperforms other automated test case generation frameworks. Sena TLS-Parser has been shown to solve the problem of software testers manually creating test cases, while completing the optimization in a shorter period of time.

Journal of King Saud University - Computer and Information Sciences, 2018
In the software development life cycle (SDLC), the testing phase is important for testing the functionalities of any software. In this phase, test cases are generated to test the software functionalities. This paper presents an approach for detecting and refactoring code smells in the source code of an Android application in order to reduce redundancy in test case generation. Refactoring is one of the vital activities in software development and maintenance. It is a process of code alteration that aims to make structural modifications to the source code without altering any external functionality. These changes often improve software qualities such as readability, execution time and maintainability. The proposed approach is then implemented and evaluated in order to determine its effectiveness in reducing the redundancy of test case generation.