Proceedings Volume 9159

Sixth International Conference on Digital Image Processing (ICDIP 2014)

Charles M. Falco, Chin-Chen Chang, Xudong Jiang
View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 6 May 2014
Contents: 2 Sessions, 80 Papers, 0 Presentations
Conference: Sixth International Conference on Digital Image Processing 2014
Volume Number: 9159

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9159
  • Sixth International Conference on Digital Image Processing (ICDIP 2014)
Front Matter: Volume 9159
Front Matter: Volume 9159
This PDF file contains the front matter associated with SPIE Proceedings Volume 9159, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Sixth International Conference on Digital Image Processing (ICDIP 2014)
Recognition methods on cloud amount, movement of clouds, and rain clouds for rainfall prediction using whole sky images
Kazuma Fujinuma, Masayuki Arai
The final target of our research is to develop a system for forecasting local concentrated heavy rain, such as guerrilla rainstorms, using whole-sky images taken on the ground. To construct this system, this paper proposes recognition methods for cloud amount, cloud movement, and rain clouds. The experimental results show that red/blue (R/B) values are effective for measuring cloud amount. However, using the centers of gravity of the images and the differences among time-sequenced images is insufficient for recognizing cloud movement and does not correlate well with the R/B values and rain.
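To illustrate the R/B cloud-amount measure described above, here is a minimal sketch; the threshold value and the synthetic test image are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cloud_amount(rgb, rb_threshold=0.95):
    """Fraction of sky pixels classified as cloud by the R/B ratio.

    Clear sky scatters blue strongly, so R/B is low; clouds are nearly
    white, so R/B approaches 1.  The threshold is illustrative.
    """
    r = rgb[..., 0].astype(np.float64)
    b = rgb[..., 2].astype(np.float64) + 1e-6   # avoid division by zero
    return ((r / b) > rb_threshold).mean()

# Synthetic half-cloudy sky: top half whitish cloud, bottom half blue sky
sky = np.zeros((100, 100, 3), np.uint8)
sky[:50] = (210, 205, 200)
sky[50:] = (60, 90, 200)
print(f"cloud amount: {cloud_amount(sky):.2f}")   # ~0.50
```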
Extracting contours of oval-shaped objects by Hough transform and minimal path algorithms
Circular and oval-like objects are very common in cell biology and microbiology. These objects need to be analyzed, and to that end digitized images from the microscope are used to build an automated analysis pipeline. It is essential to detect all the objects in an image as well as to extract the exact contour of each individual object, so that measurements can be performed on them, i.e., shape and texture features. Our measurement objective is achieved by probing contour detection through dynamic programming. In this paper we describe a method that uses the Hough transform and two minimal path algorithms to detect contours of (ovoid-like) objects. These algorithms are based on an existing grey-weighted distance transform and a new algorithm to extract the circular shortest path in an image. The methods are tested on an artificial dataset of 1,000 images, with an F1-score of 0.972. In a case study with yeast cells, contours from our methods were compared with another solution using Pratt's figure of merit. Results indicate that our methods were more precise, based on a comparison with a ground-truth dataset. As far as yeast cells are concerned, the segmentation and measurement results will enable future work to retrieve information from different developmental stages of the cell using complex features.
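As a hedged sketch of the detection stage only (the minimal path contour refinement is not reproduced here), a circular Hough transform via OpenCV can locate candidate ovoid objects; all parameter values below are illustrative.

```python
import cv2
import numpy as np

# Synthetic "cell": a bright ring on a dark background
img = np.zeros((200, 200), np.uint8)
cv2.circle(img, (100, 100), 40, 255, 2)
img = cv2.GaussianBlur(img, (5, 5), 0)

# Hough transform for circles; the detected centres and radii would seed
# the minimal path algorithms that extract the exact contour
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=20, minRadius=20, maxRadius=60)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"candidate object at ({x}, {y}), radius {r}")
```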
The 3D reconstruction of greenhouse tomato plant based on real organ samples and parametric L-system
Longjiao Xin, Lihong Xu, Dawei Li, et al.
In this paper, a fast and effective 3D reconstruction method for the growth of greenhouse tomato plants is proposed, using real organ samples and a parametric L-system. By analyzing the stereo structure of the tomato plant, we extract rules and parameters to assemble an L-system that is able to simulate plant growth; the components of the L-system are then translated into plant organ entities via image processing and computer graphics techniques. This method can efficiently and faithfully simulate the growing process of the greenhouse tomato plant.
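A minimal parametric L-system sketch in the spirit of this approach: each symbol carries a parameter (here, organ age) and rewriting rules grow the plant string generation by generation. The rules below are illustrative, not the ones fitted to tomato.

```python
def rewrite(word):
    """One derivation step of a toy parametric L-system."""
    out = []
    for sym, age in word:
        if sym == "A":                 # apex produces internode + leaf, keeps growing
            out += [("I", 1), ("L", 1), ("A", 0)]
        elif sym == "I":               # internode only ages (i.e. elongates)
            out.append(("I", age + 1))
        else:                          # leaves and other organs are unchanged
            out.append((sym, age))
    return out

word = [("A", 0)]                      # axiom: a single apex
for _ in range(3):
    word = rewrite(word)
print(word)   # sequence of (organ, age) pairs, ready to map onto organ meshes
```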
Automatic detection and segmentation of stems of potted tomato plant using Kinect
Daichang Fu, Lihong Xu, Dawei Li, et al.
The automatic segmentation and recognition of greenhouse crops is an important aspect of digitized facility agriculture. Crop stems are closely related to the growth of the crop; they are also an important physiological trait for identifying plant species. For these reasons, this paper focuses on the digitization process to collect and analyze the stems of greenhouse plants (tomatoes). An algorithm for automatic stem detection and extraction is proposed, based on a cheap and effective stereo vision system, the Kinect. To demonstrate the usefulness and potential applicability of our algorithm, a virtual tomato plant, whose stems are rendered with segmented stem texture samples, is reconstructed on the OpenGL graphics platform.
MMW and THz images denoising based on adaptive CBM3D
Li Dai, Yousai Zhang, Yuanjiang Li, et al.
Over the past decades, millimeter wave and terahertz radiation has received a lot of interest due to advances in emission and detection technologies, which have allowed the wide application of millimeter wave and terahertz imaging. This paper focuses on removing the stripe noise, blocking artifacts, and other interference present in such images. A new kind of nonlocal averaging method is put forward: Gaussian noise of a suitable level is added to resonate with the image, and adaptive color block-matching 3D filtering (CBM3D) is used to denoise. Experimental results demonstrate that the method improves the visual effect and removes interference at the same time, making image analysis and target detection easier.
Polyphase orthogonal waveform optimization for MIMO-SAR using Genetic Algorithm
Wael Mehany, Licheng Jiao, Khaled Hussien
A Multi-Input Multi-Output (MIMO) radar can be used to form a synthetic aperture for high-resolution imaging. To successfully utilize the MIMO Synthetic Aperture Radar (SAR) system for practical imaging applications, orthogonal waveform design plays a critical role in image formation. Focusing on the SAR application, a definition of a synthetic Integrated Side-Lobe level Ratio (ISLR) is proposed, and a cost function combining ISLR and the Peak Side-Lobe level Ratio (PSLR) is presented. A Genetic Algorithm (GA) is used to numerically optimize the design of orthogonal polyphase code sets. The obtained waveforms can be implemented in MIMO-SAR systems to improve resolution. The simulation results show the superiority of the proposed algorithm over other algorithms for the design of polyphase code sets used in MIMO-SAR.
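A simplified GA sketch for this kind of waveform design: it optimizes a single polyphase code against a PSLR-plus-ISLR autocorrelation cost (the paper's synthetic ISLR for code sets also involves cross-correlations, which are omitted here); the population size, weighting, and mutation rate are illustrative.

```python
import numpy as np
rng = np.random.default_rng(0)

M, N, POP = 4, 40, 60          # phase alphabet size, code length, population size

def cost(code):
    s = np.exp(2j * np.pi * code / M)          # unit-modulus polyphase signal
    ac = np.correlate(s, s, "full")            # aperiodic autocorrelation
    side = np.abs(np.delete(ac, N - 1))        # drop the mainlobe peak
    pslr = side.max() / N                      # peak sidelobe level ratio
    islr = (side ** 2).sum() / N ** 2          # integrated sidelobe level ratio
    return pslr + islr                         # illustrative equal weighting

pop = rng.integers(0, M, (POP, N))
for _ in range(200):
    fit = np.array([cost(c) for c in pop])
    parents = pop[np.argsort(fit)[:POP // 2]]  # truncation selection
    cut = rng.integers(1, N)                   # one-point crossover
    kids = np.concatenate([parents[:, :cut], parents[::-1, cut:]], axis=1)
    mut = rng.random(kids.shape) < 0.02        # random phase mutation
    kids[mut] = rng.integers(0, M, mut.sum())
    pop = np.concatenate([parents, kids])
print("best cost:", min(cost(c) for c in pop))
```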
An estimation of distribution algorithm (EDA) variant with QGA for Flowshop scheduling problem
Muhammad Shahid Latif, Zhou Hong, Amir Ali
In this research article, a hybrid approach is presented which is based on well-known meta-heuristic algorithms: the integration of a Quantum Genetic Algorithm (QGA) and an Estimation of Distribution Algorithm (EDA), for simplicity called Q-EDA, for flowshop scheduling, a well-known NP-hard problem, focusing on the total flow time minimization criterion. A relatively new method, angle rotations instead of random keys, has been adopted for encoding the job sequence in the flowshop, making the QGA more efficient. Further, the EDA has been integrated to update the population of the QGA by building a probability model. This probabilistic model is used to generate new candidate solutions comprising the best individuals obtained after several repetitions of the proposed Q-EDA approach. As both heuristics are probabilistic, the hybrid exhibits excellent learning capability and has a low chance of being trapped in local optima. The results obtained in this study are presented and compared with contemporary approaches in the literature. The hybrid Q-EDA has been implemented on different benchmark problems, and the experiments showed better convergence and results. It is concluded that the hybrid Q-EDA algorithm can generally produce better results when applied to the Flowshop Scheduling Problem (FSSP).
Focusing wide bandwidth and wide swath synthetic aperture sonar data using modified nonlinear chirp-scaling imaging algorithm
Zhen Tian, He-Ping Zhong, Jin-Song Tang
To solve the Synthetic Aperture Sonar (SAS) imaging problem for a wide-bandwidth transmitted signal and a wide swath, a novel modified Nonlinear Chirp-Scaling (NCS) imaging algorithm is proposed. The first key step is to reduce the phase error by preserving the fourth order of the Taylor expansion of the two-dimensional spectrum. To compensate the high-order phase error sufficiently, the second key step is to derive a series of more exact relational parameters resulting from the third- and fourth-order phase filtering and the NCS operation, by considering the linear and second-order nonlinear variation of the frequency-modulation slant rate. This operation increases the swath width for SAS with a wide-bandwidth transmitted signal. Simulation results show the accuracy and validity of the proposed modified NCS algorithm.
Estimation of vertical surface directions in outdoor environments on changes of the incident angle of sunlight in time series of observation images
Kyota Aoki, Teppei Yamamura
This paper proposes a method to estimate the directions of vertical surfaces in outdoor environments based on the changes of the incident angle of sunlight in a series of observation images captured with a fixed camera. The method uses the relation between the time when the incidence angle of sunlight on a vertical surface of a given direction is at its minimum and the time when the brightness of each pixel in the series of observation images is at its maximum. This single-day method is not robust to weather changes, so this paper introduces a method integrating multi-day estimations. With this multi-day integration, the proposed direction estimation method is robust to weather changes. The paper then presents experiments on real outdoor images.
Research on loss recovery of application layer multicast
Xinfeng Li, Huiling Shi, Zhenghao Niu, et al.
As an alternative to IP multicast, ALM implements multicast functionality at the application layer instead of the IP layer, which addresses the problem of the non-ubiquitous deployment of IP multicast. However, the reliability of ALM is low because the data is forwarded by dynamic hosts. This paper analyzes the error and delivery features of ALM trees, and further presents a data loss recovery solution (called HBHLR) for application layer multicast.
Approximate entropy: a new evaluation approach of mental workload under multitask conditions
Lei Yao, Xiaoling Li, Wei Wang, et al.
There are numerous instruments and an abundance of complex information in the traditional cockpit display-control system, and pilots require a long time to familiarize themselves with the cockpit interface. This can cause accidents when they cope with emergency events, suggesting that it is necessary to evaluate pilot cognitive workload. In order to establish a simplified method to evaluate cognitive workload under multitask conditions, we designed a series of experiments involving different instrument panels and collected electroencephalograms (EEG) from 10 healthy volunteers. The data were classified and analyzed with approximate entropy (ApEn) signal processing. ApEn increased with increasing experiment difficulty, suggesting that it can be used to evaluate cognitive workload. Our results demonstrate that ApEn can be used as an evaluation criterion for cognitive workload, with good specificity and sensitivity. Moreover, we determined an empirical formula to assess the cognitive workload interval, which can simplify cognitive workload evaluation under multitask conditions.
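For reference, a compact implementation of approximate entropy in its standard (Pincus) form; the embedding dimension and tolerance convention below are common defaults, not necessarily the settings used in the paper.

```python
import numpy as np

def apen(x, m=2, r_factor=0.2):
    """Approximate entropy: regular signals score low, irregular high."""
    x = np.asarray(x, float)
    r = r_factor * x.std()            # tolerance as a fraction of std dev

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])       # embedded vectors
        d = np.abs(emb[:, None] - emb[None, :]).max(axis=2)  # Chebyshev distance
        return np.log((d <= r).mean(axis=1)).mean()          # log match frequency

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
print("regular:", apen(np.sin(np.linspace(0, 8 * np.pi, 300))))
print("random :", apen(rng.standard_normal(300)))   # noticeably higher
```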
Computer image analysis in obtaining characteristics of images: greenhouse tomatoes in the process of generating learning sets of artificial neural networks
M. Zaborowicz, J. Przybył, K. Koszela, et al.
The aim of the project was to develop software that extracts the characteristics of a greenhouse tomato from its image. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program can process pictures in JPEG format, acquire statistical information from the picture, and export it to an external file. The software is intended to batch-analyze the collected research material, with the obtained information saved as a CSV file. The program analyzes 33 independent parameters to describe the tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used to analyze other fruits and vegetables of a spherical shape.
A simple approach for 3D reconstruction of the spine from biplanar radiography
Junhua Zhang, Xinling Shi, Liang Lv, et al.
This paper proposes a simple approach for 3D spinal reconstruction from biplanar radiography. The proposed reconstruction consists of reconstructing the 3D central curve of the spine based on epipolar geometry and automatically aligning vertebrae under the constraint of this curve. The vertebral orientations are adjusted by matching the projections of the 3D pedicles with the 2D pedicles in the biplanar radiographs. The user interaction time is within one minute for a thoracic spine. Sixteen pairs of radiographs of a thoracic spinal model were used to evaluate precision and accuracy. The precision was within 3.1 mm for location and 3.5° for orientation; the accuracy was within 3.5 mm for location and 3.9° for orientation. These results demonstrate that this approach can be a promising tool to obtain 3D spinal geometry with acceptable user interaction in scoliosis clinics.
Performance of MIMO-OFDM using convolution codes with QAM modulation
I Gede Puja Astawa, Yoedy Moegiharto, Ahmad Zainudin, et al.
The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission; one option is the convolution code. This paper presents the performance of OFDM using the Space Time Block Code (STBC) diversity technique with QAM modulation and code rate 1/2. The evaluation is done by analyzing the Bit Error Rate (BER) versus the Energy per Bit to Noise Power Spectral Density Ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel in an OFDM system. To achieve a BER of 10⁻³, 10 dB SNR is required in the SISO-OFDM scheme, and the 2×2 MIMO-OFDM scheme likewise requires 10 dB. The 4×4 MIMO-OFDM scheme requires 5 dB, while adding convolution coding to the 4×4 MIMO-OFDM scheme improves performance down to 0 dB for the same BER. This demonstrates a power saving of 3 dB relative to the 4×4 MIMO-OFDM system without coding, a power saving of 7 dB relative to 2×2 MIMO-OFDM, and significant power savings relative to the SISO-OFDM system.
Visualization of the tire-soil interaction area by means of ObjectARX programming interface
W. Mueller, M. Gruszczyński, B. Raba, et al.
The process of data visualization, important for data analysis, becomes problematic when large data sets generated by computer simulations are involved. This problem concerns, among others, models that describe the geometry of tire-soil interaction. For the purpose of a graphical representation of this area and the implementation of various geometric calculations, the authors have developed a plug-in application for AutoCAD based on the latest technologies, including ObjectARX, LINQ, and the Visual Studio platform. The selected programming tools offer a wide variety of IT structures that enable data visualization and analysis and are important, e.g., in model verification.
The non-touching method of the malting barley quality evaluation
B. Raba, K. Nowakowski, A. Lewicki, et al.
The first important stage of the malt production process is the evaluation of malting barley quality. The presented project focused on the visual features of malting barley grains. The principal aim was to elaborate a complete methodology to determine the level of grain contamination. The article describes the mechanisms of choosing parameters which can distinguish grains useful for malt production from defects and impurities. The original computer system 'Hordeum v 3.1' was used to obtain graphical data from images of contaminated barley samples. Research carried out in this area can improve the quality evaluation process of malting barley.
A halftone visual cryptography schema using ordered dither
Liu-Ping Feng, Dong-Sheng Cong, Hua-Qun Liu, et al.
Visual cryptography is a cryptographic technique which allows visual information to be encrypted in such a way that the decryption can be performed by the human visual system, without the aid of computers. This paper proposes a scheme for information hiding within the visual cryptography method. A gray image is converted into two halftone images via different dither matrices, and the secret binary pixels are encoded into shares. The secret information can be restored by stacking the different shared halftone images together. Simulation results show that the secret binary image can be decoded efficiently.
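A hedged sketch of the two ingredients: ordered dithering with a Bayer matrix, and a basic (2,2) visual cryptography share construction. The paper's actual embedding of secret pixels into the dithered shares is more involved; the matrix and block patterns here are the textbook ones.

```python
import numpy as np
rng = np.random.default_rng(2)

BAYER = np.array([[ 0,  8,  2, 10],        # 4x4 ordered-dither threshold matrix
                  [12,  4, 14,  6],
                  [ 3, 11,  1,  9],
                  [15,  7, 13,  5]])

def halftone(gray):
    """Ordered dither: compare each pixel with the tiled Bayer threshold."""
    h, w = gray.shape
    thr = (np.tile(BAYER, (h // 4 + 1, w // 4 + 1))[:h, :w] + 0.5) / 16.0
    return (gray / 255.0 > thr).astype(np.uint8)      # 1 = white, 0 = black

def make_shares(secret):
    """(2,2) scheme: white pixels get identical 2x2 blocks, black pixels
    complementary ones, so physically stacking the shares (AND on white)
    reveals the secret to the eye."""
    patterns = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), np.uint8)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            s2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] else 1 - p
    return s1, s2

secret = (rng.random((8, 8)) > 0.5).astype(np.uint8)
s1, s2 = make_shares(secret)
stacked = s1 & s2    # white survives only where the secret pixel was white
```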
A UML-based metamodel for software evolution process
Zuo Jiang, Wei-Hong Zhou, Zhi-Tao Fu, et al.
A software evolution process is a set of interrelated software processes under which the corresponding software evolves. An object-oriented software evolution process meta-model (OO-EPMM), its abstract syntax, and formal OCL constraints on the meta-model are presented in this paper. OO-EPMM can represent not only the software development process but also software evolution.
Hybrid DE-PSO assisted MBER beam-forming for QAM systems
Xinying Guo, Zhe Zhang, Jiankang Zhang, et al.
The Minimum Bit Error Rate (MBER) detector is capable of outperforming the conventional Minimum Mean Squared Error (MMSE) detector by minimizing the Bit Error Rate (BER) directly. In this paper, we propose a hybrid Differential Evolution-Particle Swarm Optimization (DE-PSO) assisted MBER beamforming scheme for multiple-antenna systems employing Quadrature Amplitude Modulation (QAM). The proposed hybrid scheme coalesces the merits of the DE and PSO algorithms, each compensating for the deficiencies of the other. Theoretical analysis and simulation results demonstrate that the proposed hybrid DE-PSO assisted MBER beamforming scheme outperforms the existing DE-aided MBER beamforming scheme in convergence speed.
Single sample face recognition based on virtual images and 2DLDA
Jun Yang, Yanli Liu
When there is only one sample per person in the gallery set, conventional face recognition methods that work with many training samples do not perform well. In particular, a number of methods based on the Fisher linear discrimination criterion fail because the within-class scatter matrix has all elements equal to zero. To solve this problem, a method is proposed to generate virtual sub-images of one face by an image processing method. With these virtual images, the within-class scatter matrix can be evaluated, and supervised learning methods such as 2D Fisher linear discrimination analysis (2DLDA) can be utilized for feature extraction. Experimental results on the ORL face database show that the proposed method is efficient and achieves higher recognition accuracy than others.
A reconfigurable ASIP for high-throughput and flexible FFT processing in SDR environment
Ting Chen, Hengzhu Liu, Botao Zhang
This paper presents a high-throughput and reconfigurable processor for fast Fourier transform (FFT) processing based on the SDR methodology. It adopts an application-specific instruction-set processor (ASIP) and single instruction multiple data (SIMD) architecture to exploit the parallelism of butterfly operations in the FFT algorithm. Moreover, a novel 3-dimensional multi-bank memory is proposed for parallel conflict-free accesses. The overall throughput and power efficiency are greatly enhanced by parallel and streamlined processing. A test chip supporting 64- to 2048-point FFTs was set up for experiments. Logic synthesis reveals a maximum clock frequency of 500 MHz and an area of 0.49 mm² for the processor's logic using a low-power 45-nm technology, and the dynamic power estimate is about 96.6 mW. Compared with previous works, our FFT ASIP achieves higher energy efficiency at a relatively low area cost.
A new method of 3D scene recognition from still images
Li-ming Zheng, Xing-song Wang
Most methods of monocular visual three-dimensional (3D) scene recognition involve supervised machine learning. However, these methods often rely on prior knowledge: they learn the image scene from a training dataset. For this reason, when the sampling equipment or scene is changed, monocular visual 3D scene recognition may fail. To cope with this problem, a new unsupervised learning method for monocular visual 3D scene recognition is proposed here. First, the image is segmented into superpixels based on the CIELAB color space values L, a, and b and on the coordinate values x and y of the pixels, forming a superpixel image with a specific density. Second, a spectral clustering algorithm based on the superpixels' color characteristics and neighboring relationships is used to reduce the dimensions of the superpixel image. Third, fuzzy distribution density functions representing sky, ground, and façade are multiplied with the segment pixels, yielding the expectations of these segments and a preliminary classification of sky, ground, and façade. Fourth, the most accurate classification images of sky, ground, and façade are extracted through tier-1 wavelet sampling and the Manhattan direction feature. Finally, a depth perception map is generated based on the pinhole imaging model and the linear perspective information of the ground surface. Here, 400 images from the Make3D Image data from the Cornell University website were used to test the algorithm. The experimental results showed that this unsupervised learning method provides a more effective monocular visual 3D scene recognition model than other methods.
Estimation and prediction of noise power based on variational Bayesian and adaptive ARMA time series
Jingyi Zhang, Yonggui Li, Yonggang Zhu, et al.
Estimation and prediction of noise power are very important for communication anti-jamming and for efficient allocation of spectrum resources in adaptive wireless communication and cognitive radio. In order to estimate and predict the time-varying noise power caused by natural factors and jamming in the high-frequency channel, a Variational Bayesian algorithm and an adaptive ARMA time series are proposed. A time-varying noise power model, controlled by the noise variance rate, is established; the noise power is then estimated with the Variational Bayesian algorithm, and the results show that the estimation error is related to the observation interval. Furthermore, through analysis of the correlation characteristics of the estimated power, the noise power can be predicted based on the adaptive ARMA time series, and the results show that the noise power in the next 5 intervals can be predicted with a proportional error of less than 0.2.
The analysis and design of high speed double delta sampling circuit for CMOS image sensor
Xiaohui Liu, Yuanfu Zhao, Liyan Liu, et al.
A high-speed double delta sampling (DDS) circuit with a pipelined structure for CMOS image sensors (CIS) is presented. Considering the low readout speed of the DDS circuit compared with the correlated double sampling (CDS) circuit, we separate the main operation of the DDS circuit into two steps and run the two steps alternately in odd and even readout columns, resembling a pipelined operation. Thus, the readout of the proposed DDS is twice as fast as that of the traditional DDS. The architecture and readout sequence of the new circuit are introduced in detail, and simulation results indicate that the proposed circuit achieves high-speed performance.
Neural image analysis in the process of quality assessment: domestic pig oocytes
P. Boniecki, J. Przybył, T. Kuzimska, et al.
The questions related to the quality classification of animal oocytes are explored by numerous scientific and research centres. This research is important, particularly in the context of improving the breeding value of farm animals. The methods leading to the stimulation of normal development of a larger number of fertilised animal oocytes in extracorporeal conditions are of special importance. Growing interest in the techniques of assisted reproduction has resulted in a search for new, increasingly effective methods for the quality assessment of mammalian gametes and embryos. Progress in the production of in vitro animal embryos in fact depends on the proper classification of the obtained oocytes. The aim of this paper was the development of an original method for the quality assessment of oocytes, performed on the basis of their graphical presentation in the form of microscopic digital images. The classification process was implemented on the basis of information coded in the form of microphotographic pictures of domestic pig oocytes, using modern methods of neural image analysis.
An improved design method for EPC middleware
Guohuan Lou, Ran Xu, Chunming Yang
To address the problems and difficulties that small and medium enterprises currently encounter when using the EPC (Electronic Product Code) ALE (Application Level Events) specification to implement middleware, and based on an analysis of the principles of EPC middleware, an improved design method for EPC middleware is presented. This method exploits the powerful functionality of the MySQL database, using the database to connect readers/writers with the upper application system instead of developing an ALE application program interface, to achieve middleware with general functionality. The structure is simple and easy to implement and maintain. Under this structure, different types of readers/writers can be added and configured conveniently, and the expandability of the system is improved.
Discriminative dictionary based representation and classification of image texture
Texture classification is a fundamental and yet difficult task in machine vision and image processing. In recent years, the sparse representation-based classification (SRC) method and the design of its corresponding dictionaries have drawn increasing attention in the pattern recognition community, due to SRC's high recognition rate, robustness to corruption and occlusion, and little dependence on the features. In this paper, we present a discriminative dictionary learning approach and apply it to the sparse representation-based classification framework for image texture representation and classification. Experimental results on different testing data demonstrate the promise of our new approach compared with previous algorithms.
Analysis of measured data of human body based on error correcting frequency
Aiyan Jin, Gao Peipei, Xiaomei Shang
Anthropometry measures all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are gathered, and the data errors are analyzed by examining the error frequency and by using the analysis-of-variance method from mathematical statistics. The paper also covers determining the accuracy of the measured data and the difficulty of measuring particular parts of the human body, studying the causes of data errors, and summarizing the key points for minimizing errors. By analyzing the measured data based on error frequency, the paper provides reference material to promote the development of the garment industry.
NSCT-based fusion enhancement for multispectral finger-vein images
Dongdong Wu, Jinfeng Yang
Personal identification based on single-spectral finger-vein images has been widely investigated recently. However, in finger-vein imaging, image degradation is the main factor causing lower recognition accuracy. To improve finger-vein image quality, in this paper multispectral finger-vein images (760 nm and 850 nm) are fused for contrast enhancement using the NSCT transformation. The proposed method preserves the completeness and sharpness of the finger veins. Experimental results demonstrate that the proposed method is powerful in enhancing finger-vein image contrast and achieves lower equal error rates in finger-vein recognition, even when the original images have poor contrast.
Initial parameters problem of WNN based on particle swarm optimization
Chi-I Yang, Kaicheng Wang, Kueifang Chang
Stock price prediction with a wavelet neural network (WNN) involves minimizing RMSE by adjusting the initial parameter values of the network, the training data percentage, and the threshold value, in order to predict the fluctuation of the stock price over two weeks. The objective of this dissertation is to reduce the number of parameters to be adjusted while still minimizing RMSE. There are three initial network parameters: w, t, and d. The optimization of these three parameters is conducted by the Particle Swarm Optimization method, and a comparison is made with the performance of the original program, proving that the RMSE can be even lower than before the optimization. It is also shown that there is no need to adjust the training data percentage or the threshold value for 68% of the stocks when the training data percentage is set at 10% and the threshold value is set at 0.01.
Organoleptic damage classification of potatoes with the use of image analysis in production process
K. Przybył, M. Zaborowicz, K. Koszela, et al.
In the agro-food sector, the safety of healthy food is required. Therefore, farms are inspected against the quality standards of production in all sectors. Farms must meet the requirements dictated by the legal regulations in force in the European Union. Currently, manufacturers seek to make their food products unbeatable.

This gives them the chance to build their own brand on the market. In addition, they use technologies that can increase the scale of production. Moreover, in the manufacturing process they tend to maintain a high level of quality in their products.

Potatoes may be included in this group of agricultural products. Potatoes have become one of the major and most popular edible plants. Globally, 60% of potatoes are used for consumption; in Poland, 40%. This is primarily due to their consumer and nutritional qualities. Potatoes are easy to digest. A medium-sized potato bigger than 60 mm in diameter contains only about 60 calories and very little fat. Moreover, it is a source of many vitamins, such as vitamin C, vitamin B1, vitamin B2, and vitamin E [1]. The consumer quality parameters, called organoleptic (sensory) properties, are evaluated by means of the sensory organs using the point method. The most important are: flavor, flesh color, and darkening of the tuber flesh when raw and after cooking.

In the production process it is important to prepare potatoes adequately, appropriately, and accurately for use and sale. The quality of potatoes is evaluated on the basis of organoleptic quality standards for potatoes, so there is a need to automate this process. This requires appropriate tools: image analysis and classification models using artificial neural networks that help assess the quality of potatoes [2, 3, 4].
Subway tunnel crack identification algorithm research based on image processing
Biao Bai, Liqiang Zhu, Yaodong Wang
The detection of cracks in tunnels has a profound impact on tunnel safety. Low contrast, uneven illumination, and severe noise pollution are common in tunnel surface images. As traditional image processing algorithms are not suitable for detecting tunnel cracks, a new image processing method for detecting cracks in surface images of subway tunnels is presented in this paper. This algorithm includes two steps. The first step is preprocessing, which uses global and local methods simultaneously. The second step is the elimination of different types of noise based on connected components. The experimental results show that the proposed algorithm is effective for detecting tunnel surface cracks.
A multi-approach feature extractions for iris recognition
Sanpachai H., Settapong M.
Biometrics is a promising technique for identifying individual traits and characteristics. Iris recognition is one of the most reliable biometric methods. As the iris texture and color are fully developed within a year of birth, they remain unchanged throughout a person's life, in contrast to fingerprints, which can be altered by accidental damage, dry or oily skin, or dust. Although iris recognition has been studied for more than a decade, there are limited commercial products available due to its arduous requirements, such as camera resolution, hardware size, expensive equipment, and computational complexity. At the present time, however, technology has overcome these obstacles. Iris recognition proceeds through several sequential steps, including pre-processing, feature extraction, post-processing, and matching. In this paper, we adopt a directional high-low pass filter for feature extraction, and a box-counting fractal dimension and iris code are proposed as feature representations. Our approach has been tested on the CASIA Iris Image database, and the results are considered successful.
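As a sketch of the box-counting fractal dimension used here as a feature representation, one standard formulation follows; the grid sizes and fitting range are illustrative, and the paper's variant may differ.

```python
import numpy as np

def box_counting_dimension(binary):
    """Fit log N(s) ~ -D log s, where N(s) is the number of s-by-s boxes
    containing at least one foreground pixel."""
    n = binary.shape[0]
    sizes, counts = [], []
    s = n // 2
    while s >= 1:
        m = n - n % s                              # crop so boxes tile evenly
        boxes = binary[:m, :m].reshape(m // s, s, -1, s)
        occupied = boxes.any(axis=(1, 3)).sum()    # boxes touching the pattern
        sizes.append(s)
        counts.append(max(occupied, 1))
        s //= 2
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return -slope

# Sanity check: a filled square is 2-dimensional
print(f"D = {box_counting_dimension(np.ones((64, 64), bool)):.2f}")  # ~2.00
```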
Nuclear norm-regularized k-space-based parallel imaging reconstruction
Lin Xu, Xiaoyun Liu
Parallel imaging reconstruction suffers from serious noise amplification at high accelerations, which can be alleviated with regularization by imposing prior information or constraints on the image. Nevertheless, point-wise interpolation of missing k-space data restricts the use of prior information in k-space-based parallel imaging reconstructions such as generalized auto-calibrating partially parallel acquisitions (GRAPPA). In this study, a regularized k-space-based parallel imaging reconstruction is presented. We first formulate the reconstruction of missing data within a patch as a linear inverse problem. Instead of exploiting prior information on the image or its transform domain, the proposed method exploits the rank deficiency of a structured matrix consisting of vectorized patches from the entire k-space, which leads to a nuclear norm-regularized problem solved iteratively by numerical algorithms. Brain imaging studies are performed, demonstrating that the proposed method is capable of mitigating noise at high accelerations in GRAPPA reconstruction.
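The core numerical step behind such solvers, singular value soft-thresholding (the proximal operator of the nuclear norm), can be sketched on a generic low-rank completion problem. This is illustrative of structured low-rank completion in general, not the paper's exact algorithm, and the parameters are arbitrary.

```python
import numpy as np

def svt(X, tau):
    """Singular value soft-thresholding: prox operator of the nuclear norm."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vh

def complete_lowrank(X, mask, tau=1.0, iters=300):
    """Alternate SVT with re-imposing the acquired samples (mask == 1)."""
    Y = X * mask
    for _ in range(iters):
        Y = X * mask + svt(Y, tau) * (1 - mask)   # keep data, update the rest
    return Y

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 8)) @ rng.standard_normal((8, 40))   # rank-8 matrix
mask = (rng.random(A.shape) < 0.6).astype(float)                  # 60% sampled
A_hat = complete_lowrank(A, mask)
print("relative error:", np.linalg.norm(A_hat - A) / np.linalg.norm(A))
```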
Design of SPARC V8 superscalar pipeline applied Tomasulo's algorithm
Xue Yang, Lixin Yu, Yunkai Feng
A superscalar pipeline applying Tomasulo's algorithm is presented in this paper. The design begins with a dual-issue superscalar processor based on LEON2, and Tomasulo's algorithm is adopted to implement out-of-order execution. Instructions are separated into three different classes and executed by three different function units so as to reduce area and increase execution speed. Results are written back into registers in program order to ensure functional correctness. The mechanisms of the reservation station, common data bus, and reorder buffer are presented in detail. The structure can issue and execute at most three instructions at a time, and branch prediction is also realized through the reorder buffer. The performance of the superscalar pipeline applying Tomasulo's algorithm is improved by 41.31% compared with a single-issue pipeline.
The application of computer image analysis in life sciences and environmental engineering
R. Mazur, A. Lewicki, K. Przybył, et al.
The main aim of the article is to present research on the application of computer image analysis in Life Sciences and Environmental Engineering. The authors used different methods of computer image analysis in the development of an innovative biotest for modern biomonitoring of water quality. The created tools were based on live organisms, the bioindicators Lemna minor L. and Hydra vulgaris Pallas, together with computer image analysis of the negative reactions of the organisms during exposure to selected water toxicants. All of these methods belong to acute toxicity tests and are particularly essential in the ecotoxicological assessment of water pollutants. The developed bioassays can be used not only in scientific research but are also applicable in environmental engineering and agriculture, in the study of the adverse effects of various compounds used in agriculture and industry on water quality.
The use of speckle strain analysis to identify plastic zone formation in complex composites
AWE are currently developing highly loaded energetic composite formulations and require a model to predict their mechanical behaviour. It was decided to develop a constituent model to allow the determination of the mechanical properties of these composites while reducing the amount of physical handling of the materials. Fracture testing of a reference material composed of the chosen filler and a binder with known mechanical properties was undertaken. Digital image correlation (DIC) was used to generate strain maps during mechanical testing, which in turn were used to populate these predictive models.
Quality assessment of microwave-vacuum dried material with the use of computer image analysis and neural model
K. Koszela, J. Otrząsek, M. Zaborowicz, et al.
The farming area for vegetables in Poland is constantly changing, and each year the cultivation structure of particular vegetables differs. Among vegetables, the cultivation of carrot plays a significant role. According to the Main Statistical Office (GUS), in 2012 carrot held second position among the cultivated root vegetables, estimated at 835 thousand tons. Poland is perceived as a leading producer of carrot, holding fourth place in the ranking of global producers and first place in the EU [1]. It is also noteworthy that the demand for dried vegetables is still increasing. This tendency affects the development of the drying industry in Poland, contributing to the utilization of product surplus. Dried vegetables are used increasingly often in various sectors of the food products industry, due to their high nutritional value as well as the changing alimentary preferences of consumers [2-3]. Dried carrot plays a crucial role among dried vegetables because of its wide scope of use and high nutritional value: it contains a lot of carotene and sugar present in the form of crystals. Carrot also undergoes many different drying processes, which makes it difficult to perform a reliable quality assessment and classification of the dried material. Among the many qualitative properties of dried carrot, color and shape have an important influence on a positive or negative result of the quality assessment. The aim of the research project was to develop a method for the analysis of microwave-vacuum dried carrot images and to apply it to the classification of individual fractions in the sample studied for quality assessment. During the research, digital photographs of dried carrot were taken, which constituted the basis for assessment performed by a dedicated computer programme developed as part of the research. Consequently, using a neural model, the dried material was classified [4-6].
A no-reference objective image sharpness metric for perception and estimation
Shan Huang, Yong Liu, Haiqing Du
This article proposes a simple sharpness metric for perception and estimation. The block-based estimator uses both edge-based and pixel-based methods, all in the spatial domain. For each block, the estimation involves measuring the spread of edges and comparing the variations of neighboring pixels in the original image and a re-blurred version of the same image. Accounting for visual perception, these measures are then combined via a weighted geometric mean into a final value that quantifies the global sharpness of the image.
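A sketch of the pixel-based half of such a metric (the edge-spread measurement and perceptual weighting are omitted): compare neighboring-pixel variation before and after re-blurring; sigma is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reblur_sharpness(img, sigma=1.5):
    """Sharp images lose much neighboring-pixel variation when re-blurred;
    already-blurry images lose little.  Returns a value in [0, 1]."""
    img = img.astype(float)
    blur = gaussian_filter(img, sigma)
    var = lambda a: np.abs(np.diff(a, axis=0)).sum() + np.abs(np.diff(a, axis=1)).sum()
    dv, dvb = var(img), var(blur)
    return (dv - dvb) / max(dv, 1e-9)

rng = np.random.default_rng(4)
sharp = rng.random((64, 64))
print(reblur_sharpness(sharp))                       # high: sharp content
print(reblur_sharpness(gaussian_filter(sharp, 2)))   # lower: blurred content
```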
Median-based thresholding, minimum error thresholding, and their relationships with histogram-based image similarity
Yaobin Zou, Lulu Fang, Fangmin Dong, et al.
A popular histogram-based thresholding method is minimum error thresholding (MET), proposed by Kittler and Illingworth [Minimum error thresholding, Pattern Recognition 19 (1) (1986) 41-47], whereas Xue and Titterington recently proposed median-based thresholding (MBT) [Median-based image thresholding, Image and Vision Computing 29 (9) (2011) 631-637]. Both MET and MBT can be derived from the maximization of log-likelihood. In this paper, we present a different theoretical interpretation of MBT and MET, from the perspective of minimizing Kullback-Leibler (KL) divergence. Since the KL divergence is a measure of the difference between two probability distributions, it is reasonable to regard MET and MBT as special applications of histogram-based image similarity (HBIS) to image thresholding. Further, it is natural to suggest a more universal image thresholding framework based on the image similarity concept, since HBIS is just one of many image similarity methodologies. This thresholding framework directly transforms the threshold determination problem into an image comparison problem. Its significance is that it provides a concise and clear theoretical framework for developing potential thresholding methods from the plentiful image similarity theories.
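For reference, the MET criterion in its usual form: sweep the threshold t, fit a Gaussian to each side of the 256-bin histogram, and minimize the Kittler-Illingworth criterion J(t). This is the standard textbook formulation, written out as a sketch.

```python
import numpy as np

def minimum_error_threshold(hist):
    """Kittler-Illingworth MET on a 256-bin gray-level histogram."""
    p = hist / hist.sum()
    g = np.arange(256, dtype=float)
    best_t, best_j = 0, np.inf
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()            # class priors
        if min(w0, w1) < 1e-9:
            continue
        m0 = (g[:t] * p[:t]).sum() / w0              # class means
        m1 = (g[t:] * p[t:]).sum() / w1
        v0 = ((g[:t] - m0) ** 2 * p[:t]).sum() / w0  # class variances
        v1 = ((g[t:] - m1) ** 2 * p[t:]).sum() / w1
        if min(v0, v1) < 1e-9:
            continue
        j = 1 + w0 * np.log(v0) + w1 * np.log(v1) \
              - 2 * (w0 * np.log(w0) + w1 * np.log(w1))
        if j < best_j:
            best_j, best_t = j, t
    return best_t

rng = np.random.default_rng(5)
samples = np.concatenate([rng.normal(70, 10, 5000), rng.normal(180, 15, 5000)])
hist, _ = np.histogram(samples, bins=256, range=(0, 256))
print("MET threshold:", minimum_error_threshold(hist))  # between the two modes
```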
Completely blind image quality assessment based on gray-scale fluctuations
Xichen Yang, Quansen Sun, Tianshu Wang
Completely blind image quality assessment is a further development of no-reference image quality assessment: this kind of quality assessment method is highly unsupervised, training-free, and does not rely on a natural scene statistics model. This paper proposes a completely blind quality assessment method based on gray-scale fluctuations (GFQA). The new method uses a specific image primitive to analyze the gray-scale fluctuations of the image, and the analysis result is used to assess the image quality directly. Experimental results show that the new method accords closely with human subjective judgments of diverse distorted images when tested on the large publicly available LIVE Image Quality database.
A new representative criterion for image resampling based on bootstrap and plug in algorithm
Sabra Mabrouk, Slim M'hiri, Faouzi Ghorbel
In this paper we introduce a new representativeness criterion for the bootstrap sample in image segmentation. Using the plug-in method to estimate probability density functions (pdf), we present a robust and stable criterion based on the L2 distance between the probability density estimated from the bootstrap sample and the empirical probability density of the image. The criterion is tested on satellite images.
Efficient image enhancement using subgroup region histogram equalization and decimation
Image enhancement techniques are used to improve the visual quality of images. This paper proposes an efficient image enhancement algorithm comprising three parts: decimation of the image, a bright-or-dark image decision, and contrast enhancement. It utilizes spatial decimation to reduce image resolution and thereby lessen the computational cost. In addition, the proposed algorithm employs the probability density value to control sharp brightness in the image. The histogram is divided into subgroup regions to enhance the image, and the contrast of each region is adjusted separately. Simulation results show that the proposed method significantly reduces the computational cost and performs better than the conventional method.
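A hedged sketch of the pipeline just described: decimate, pick a split point (here simply the decimated mean, standing in for the bright-or-dark decision), and equalize each histogram subgroup region within its own range. The number of regions and the split rule are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def subgroup_equalize(img, step=2):
    small = img[::step, ::step]                    # spatial decimation
    split = int(small.mean())                      # crude bright/dark boundary
    out = img.copy()
    for lo, hi in [(0, split), (split, 256)]:      # two subgroup regions
        sel = (img >= lo) & (img < hi)
        if hi - lo < 2 or not sel.any():
            continue
        hist, _ = np.histogram(small[(small >= lo) & (small < hi)],
                               bins=hi - lo, range=(lo, hi))
        cdf = hist.cumsum()
        if cdf[-1] == 0:
            continue
        lut = lo + (cdf / cdf[-1]) * (hi - 1 - lo)  # equalize within the region
        out[sel] = lut[img[sel] - lo].astype(img.dtype)
    return out

rng = np.random.default_rng(6)
img = rng.integers(0, 256, (128, 128)).astype(np.uint8)
enhanced = subgroup_equalize(img)
```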
Stereo matching based on color image segmentation and cross adaptive window
Zetao Jiang, Lei Zhou, Le Zhou
A stereo matching method based on color image segmentation and a cross adaptive window is used to address imprecise matching in depth-discontinuity regions and low-texture regions. This article first produces a matching cost based on color segmentation regions, obtained by segmenting the color image, and a matching cost based on a cross adaptive window that makes use of color similarity, and then integrates the two to form a combined matching cost. Finally, a fast disparity search method is used to narrow the matching range and thereby raise efficiency. The experimental results show improved matching accuracy in depth-discontinuity regions and low-texture regions, together with higher matching speed.
Lactiferous vessel detection from microscopic cross-sectional images
This paper presents methods to detect and segment lactiferous (rubber latex) vessels from gray-scale microscopic cross-sectional images using polynomial curve fitting with maximum and minimum stationary points. Polynomial curve fitting is used to detect the location of lactiferous vessels in an image of a non-dyed cross-sectional slice taken by a digital camera through a microscope lens. The lactiferous vessels are then segmented using maximum and minimum stationary points with a morphological closing operation. Two species of rubber trees aged between one and two years are sampled, namely RRIM600 and RRIT251. Two data sets, each containing 30 microscopic cross-sectional images of one-year-old rubber tree stems, are used in the experiments, and the results reveal that most of the lactiferous vessel areas can be segmented correctly.
Enhancement MSRCR algorithm of color fog image based on the adaptive scale
Yin Gao, Lijun Yun, Junsheng Shi, et al.
To deal with the hue and saturation distortion problems of the traditional Retinex algorithm with a color restoration coefficient, an enhanced MSRCR algorithm for color fog images based on adaptive scale is proposed. In the RGB color space, the color restoration coefficient is confirmed first. Then, according to the pixel values of each channel, a local weight correction function is introduced and the Gaussian kernel of the required scale is calculated. Local correction is applied to the estimated reflection component, and the multi-scale image is obtained by weighting. Finally, contrast stretching and global gamma correction are applied to the obtained image to enhance it. Subjective observation and objective evaluation show that the algorithm is better than the traditional MSRCR algorithm both overall and in the details.
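For orientation, a compact fixed-scale MSRCR sketch; the adaptive-scale weighting and local correction proposed in the paper are not reproduced, and the scale set and the constants alpha and beta are commonly quoted defaults rather than the paper's values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msrcr(img, sigmas=(15, 80, 250), alpha=125.0, beta=46.0):
    """Multi-scale retinex with a color restoration factor."""
    img = img.astype(np.float64) + 1.0             # avoid log(0)
    msr = np.zeros_like(img)
    for s in sigmas:                               # equal weights over scales
        for c in range(3):
            msr[..., c] += np.log(img[..., c]) - np.log(gaussian_filter(img[..., c], s))
    msr /= len(sigmas)
    # color restoration: each channel's share of the total intensity
    crf = beta * (np.log(alpha * img) - np.log(img.sum(axis=2, keepdims=True)))
    out = msr * crf
    out = (out - out.min()) / (out.max() - out.min() + 1e-9)  # contrast stretch
    return (out * 255).astype(np.uint8)

rng = np.random.default_rng(7)
foggy = (rng.random((64, 64, 3)) * 60 + 150).astype(np.uint8)  # low-contrast input
restored = msrcr(foggy)
```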
Volume-based indirect illumination with irradiance decomposition
This paper proposes a fast and accurate algorithm for indirect illumination. It uses volumes of different resolutions to sample and cache the geometric information and the secondary lights. By dividing the irradiance into two parts, it treats lights coming from the far field and from the near field differently. For the far-field lights, it propagates spherical-harmonic-represented light on coarse voxels; for the near-field lights, it shoots rays and collects their contributions on fine voxels. By doing this, the algorithm avoids using many rays to march long distances. In the experiments, it renders about ten times faster than the VGI algorithm at the same image quality, especially for large and complex scenes. Meanwhile, it further accelerates rendering by introducing incremental multi-resolution gathering. The experiments illustrate fast and accurate indirect lighting effects.
Initialisation of 3D level set for hippocampus segmentation from volumetric brain MR images
Maryam Hajiesmaeili, Jamshid Dehmeshki, Bashir Bagheri Nakhjavanlo, et al.
Shrinkage of the hippocampus is a primary biomarker for Alzheimer's disease and can be measured through accurate segmentation of brain MR images. This paper describes the problem of initialising a 3D level set algorithm for hippocampus segmentation that must cope with some challenging characteristics, such as small size, a wide range of intensities, narrow width, and shape variation. In addition, MR images require bias correction to account for the inhomogeneity associated with the scanner technology. Due to these inhomogeneities, using a single initialisation seed region inside the hippocampus is prone to failure. Alternative initialisation strategies are explored, such as using multiple initialisations in different sections of the hippocampus (the head, body, and tail). The Dice metric is used to validate our segmentation results against ground truth for a dataset of 25 MR images. Experimental results indicate significant improvement in segmentation performance using the multiple-initialisation techniques, yielding more accurate segmentation of the hippocampus.
3D scene reconstruction from multi-aperture images
Miao Mao, Kaihuai Qin
With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. Firstly, images with different apertures are captured via programmable aperture. Secondly, we use SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate camera parameters and 3D positions of matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective to reconstruct dense 3D scene.
Hardware performance versus video quality trade-off for Gaussian mixture model based background identification systems
Mariangela Genovese, Ettore Napoli, Nicola Petra
Background identification is a fundamental task in many video processing systems. The Gaussian Mixture Model is a background identification algorithm that models the pixel luminance with a mixture of K Gaussian distributions, where the number of distributions determines the accuracy of the background model and the computational complexity of the algorithm. This paper compares two hardware implementations of the Gaussian Mixture Model that use three and five Gaussians per pixel. A trade-off analysis is carried out by evaluating the quality of the processed video sequences and the hardware performance. The circuits are implemented on FPGA by exploiting a state-of-the-art, hardware-oriented formulation of the Gaussian Mixture Model equations and by using truncated binary multipliers. The results suggest that the circuit using three Gaussian distributions provides video with good accuracy while requiring significantly fewer resources than the option using five Gaussian distributions per pixel.
Edge orientation spatiogram for silhouette image retrieval
Yan Yu, Tianjiang Wang, Yanli Liu
Edges provide important visual information. In order to enhance the description ability for image edges, a new descriptor is proposed by integrating the spatial distribution information of edge points with the edge orientation histogram (EOH); it is called the edge orientation spatiogram (EOS). To be invariant to scale transformation and translation, we utilize the polar radius, with the image centroid as origin, to represent the spatial information of each edge point. We test EOS on the MPEG7 database with several similarity measures and evaluate its performance in a silhouette image retrieval scenario. The experimental results show that the EOS edge feature descriptor performs better than the traditional EOH descriptor.
Fast algorithm of low power image reformation for OLED display
Myungwoo Lee, Taewhan Kim
We propose a fast algorithm for low-power image reformation for organic light-emitting diode (OLED) displays. The proposed algorithm scales the image histogram to reduce power consumption in the OLED display by remapping the gray levels of the pixels based on a fast analysis of the histogram of the input image, while maintaining the contrast of the image. The key idea is that a large number of gray levels are never used in an image, and these gray levels can be effectively exploited to reduce power consumption. On the other hand, to maintain image contrast, the gray-level remapping takes into account the size of the objects in which each gray level appears, reforming the gray levels in large objects only slightly. Through experiments with 24 Kodak images, our proposed algorithm is shown to reduce power consumption by 10% even with 9% contrast enhancement. The algorithm runs in linear time, so it can be applied to moving pictures at high resolution.
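A toy sketch of the histogram-packing idea: the object-size weighting that protects contrast in large objects is omitted, and removing half of each unused gap is an arbitrary illustrative compromise, not the paper's rule.

```python
import numpy as np

def remap_unused_levels(img):
    """Shift each used gray level downward by half the number of unused
    levels below it: lower levels mean lower OLED power, while the order
    (and part of the spacing) of the used levels is preserved."""
    hist = np.bincount(img.ravel(), minlength=256)
    unused_below = np.cumsum(hist == 0)            # unused levels at or below i
    lut = (np.arange(256) - unused_below // 2).clip(0, 255).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(8)
img = rng.choice([40, 90, 200, 230], size=(64, 64)).astype(np.uint8)
out = remap_unused_levels(img)
print(img.mean(), out.mean())   # lower mean gray level ~ lower OLED power
```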
Complex wavelet based speckle reduction using multiple ultrasound images
Ultrasound imaging is a dominant tool for diagnosis and evaluation in medical imaging systems. However, as its major limitation is that the images it produces suffer from low quality due to the presence of speckle noise, to provide better clinical diagnoses, reducing this noise is essential. The key purpose of a speckle reduction algorithm is to obtain a speckle-free high-quality image whilst preserving important anatomical features, such as sharp edges. As this can be better achieved using multiple ultrasound images rather than a single image, we introduce a complex wavelet-based algorithm for the speckle reduction and sharp edge preservation of two-dimensional (2D) ultrasound images using multiple ultrasound images. The proposed algorithm does not rely on straightforward averaging of multiple images but, rather, in each scale, overlapped wavelet detail coefficients are weighted using dynamic threshold values and then reconstructed by averaging. Validation of the proposed algorithm is carried out using simulated and real images with synthetic speckle noise and phantom data consisting of multiple ultrasound images, with the experimental results demonstrating that speckle noise is significantly reduced whilst sharp edges without discernible distortions are preserved. The proposed approach performs better both qualitatively and quantitatively than previous existing approaches.
A new method for detecting variable-size infrared targets
Parviz Khaledian, Saed Moradi, Ehdieh Khaledian
In this paper, a new method to detect small targets in infrared images is presented. The proposed method is based on the optical responses of infrared photodetectors, mainly the point spread function (PSF). The effects of the PSF on the gray level and shape of targets are used to distinguish targets from background clutter. Also, a novel operator called the Laplacian of Point Spread Function (LoPSF) is proposed to identify the location and size of infrared targets. Since detecting variable-size targets in infrared search and track (IRST) systems is of critical importance, the proposed method, unlike previous target detection methods, can robustly detect variable-size targets. The proposed algorithm is applied to a set of real infrared images captured under different conditions of target size, target distance, background clutter, etc. Simulation results demonstrate the effectiveness and validity of the proposed method in comparison with other conventional methods.
A parametric representation and feature matching technique for 3D rigid registration: application to 3D face description
Wieme Gadacha, Faouzi Ghorbel
In this communication, a robust, fast, and efficient registration approach for 3D surfaces is presented. The robustness of the approach is based on particular feature points involved in the process, which make the matching step more precise. These feature descriptors are extracted from the superposition of two surface curves, geodesic levels and radial ones, from local neighborhoods defined around reference points already picked on the surface. Moreover, by a generalized version of the Shannon theorem, an optimal number of such descriptor points is identified in order to reduce the computational time. The obtained discretized parametrisation (ordered descriptors) is then the basis of the matching phase, which becomes straightforward and more robust compared with the classic ICP algorithm. The proposed approach is evaluated in terms of time complexity, matching robustness, and efficiency. Experiments are conducted on facial surfaces from the Bosphorus database to estimate its discriminative power in face description.
A new denoising approach based on EMD
Wei Wu, Hua Peng
This paper introduces a new approach based on Empirical Mode Decomposition (EMD) for explicitly denoising signals. The EMD decomposes a noisy signal into several intrinsic mode functions (IMFs), and the estimated signal is reconstructed from the processed IMFs. In this paper, a piecewise EMD thresholding approach for denoising signals corrupted by strong noise is proposed. Simulation results show that the proposed approach performs well, especially in cases where the noise is very strong.
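A minimal sketch of the underlying EMD-thresholding idea, assuming the PyEMD package is available; it applies a plain soft threshold per IMF rather than the piecewise scheme proposed in the paper, and leaves the residue untouched.

```python
import numpy as np
from PyEMD import EMD  # assumes the PyEMD (EMD-signal) package

def emd_denoise(signal, k=1.0):
    """Decompose into IMFs, soft-threshold each IMF with a robust
    noise estimate, and reconstruct from the processed IMFs."""
    imfs = EMD().emd(signal.astype(float))
    out = imfs[-1].copy()                     # residue (trend): keep as-is
    for imf in imfs[:-1]:
        sigma = np.median(np.abs(imf)) / 0.6745        # robust noise scale
        thr = k * sigma * np.sqrt(2 * np.log(len(imf)))
        out += np.sign(imf) * np.maximum(np.abs(imf) - thr, 0)
    return out
```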
Robust real-time visual servo for fish tracking
Chi-Cheng Cheng, Chia-Wei Lin
This paper presents a robust real-time visual fish tracking system. The proposed visual servo framework is able to track a deformed target and keep the target inside the field of view at all times. For the image processing, an efficient template matching and searching method based on the mean-shift theory is developed. Robustness is achieved by adding a ratio histogram, a kernel function, and template updating to the framework for when the target is deformed. Experimental results show that the presented scheme works successfully for real-time fish tracking missions. The visual tracking task can be accomplished even when a similar object crosses over the target.
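For orientation, the mean-shift core of such a tracker looks roughly like the standard OpenCV back-projection loop below. The file name and initial window are hypothetical, and the paper's ratio histogram, kernel function, and template update are not shown.

```python
import cv2

cap = cv2.VideoCapture("fish.avi")        # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 40             # hypothetical initial target box
roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])   # hue histogram
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, window = cv2.meanShift(back, window, crit)  # shift window to mode
```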
Automated quality control in a file-based broadcasting workflow
Lina Zhang
Benefiting from the development of information and internet technologies, television broadcasting is transforming from inefficient tape-based production and distribution to integrated file-based workflows. However, no matter how much has changed, successful broadcasting still depends on the ability to deliver a consistently high-quality signal to audiences. After the transition from tape to file, traditional methods of manual quality control (QC) become inadequate, subjective, and inefficient. Based on China Central Television's fully file-based workflow at its new site, this paper introduces an automated quality control test system for accurate detection of hidden faults in media content. It discusses the system framework and workflow control when automated QC is added, puts forward a QC criterion, and presents QC software that follows this criterion. It also reports experiments on QC speed using parallel processing and distributed computing. The performance of the test system shows that adopting automated QC can make production effective and efficient, and help the station achieve a competitive advantage in the media market.
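A generic skeleton of the parallel-dispatch idea, not CCTV's system: independent QC checks on media files are embarrassingly parallel, so throughput scales with worker processes. The check body is a placeholder.

```python
from concurrent.futures import ProcessPoolExecutor

def qc_check(path):
    """Placeholder for one automated QC pass over a media file
    (e.g., black-frame, freeze-frame, or loudness detectors)."""
    issues = []
    # ... open the file at `path` and run detectors here ...
    return path, issues

def run_qc(paths, workers=8):
    """Run qc_check on many files in parallel worker processes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(qc_check, paths))
```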
Saliency location based on color contrast
Generally, the purposes of saliency detection models for salient object detection and for fixation prediction are complementary. Models for salient object detection aim to discover as many true positives as possible, while models for fixation prediction aim to generate few false positives. In this work, we attempt to combine their strengths. We accomplish this by, firstly, replacing the high-level features frequently used in a fixation prediction model with our new saliency location map in order to make the model more general. Secondly, we train a saliency detection model with human eye-tracking data so that the model corresponds well to human eye fixations (without the use of top-down attention). We evaluate the performance of our new saliency location map on both saliency detection and fixation prediction datasets in comparison with six state-of-the-art saliency detection models. The experimental results show that the proposed method is superior to the other methods for salient object detection on the MSRA dataset [1]. For fixation prediction, the results show that our saliency location map performs comparably to the high-level features while requiring much less computation time.
Visual attention based bag-of-words model for image classification
Qiwei Wang, Shouhong Wan, Lihua Yue, et al.
Bag-of-words is a classical method for image classification. The core problems are how to count the frequencies of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model utilizes a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the counting of visual-word frequencies. In addition, the VABOW model combines shape, color, and texture cues and uses L1-regularized logistic regression to select the most relevant and most efficient features. We compare our approach with a traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
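The saliency-weighted counting step could look like the NumPy sketch below (an illustration under assumed inputs, not the authors' implementation): each local descriptor votes for its nearest codeword with a weight read from the saliency map at the descriptor's position.

```python
import numpy as np

def weighted_bow(descriptors, positions, codebook, saliency):
    """Saliency-weighted bag-of-words histogram.
    descriptors: (N,D); positions: (N,2) row/col; codebook: (K,D);
    saliency: 2D map in [0,1]."""
    # nearest codeword for every descriptor (brute-force L2)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.zeros(len(codebook))
    for w, (r, c) in zip(words, positions):
        hist[w] += saliency[r, c]          # weight the vote by saliency
    return hist / (hist.sum() + 1e-12)     # normalized histogram
```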
Sea ice drift tracking in the Bohai Sea based on optical flow
Qing Wu, Wenhui Lang, Xi Zhang, et al.
In the Bohai Sea, sea ice drift is hard to track because the ice moves quickly, and the long repeat cycles of the satellites used in the polar regions are not suitable for ice drift tracking there. The unique characteristics of the Geostationary Ocean Color Imager (GOCI) allow sea ice drift to be tracked on a daily basis using images at 1-hour intervals (eight images per day). The optical flow method is applied to track sea ice drift in the Bohai Sea. Experiments show that the sea ice vectors obtained from the optical flow method agree well with manually selected reference data.
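The abstract does not name a specific optical-flow algorithm; as one concrete possibility, OpenCV's dense Farneback flow between two consecutive GOCI frames yields per-pixel displacement vectors (file names hypothetical):

```python
import cv2

# two co-registered GOCI frames taken one hour apart (hypothetical files)
prev = cv2.imread("goci_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("goci_t1.png", cv2.IMREAD_GRAYSCALE)
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
# flow[..., 0] and flow[..., 1] hold per-pixel x and y displacements;
# with the ground resolution and the 1 h interval they convert to
# drift velocity vectors.
```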
Blooming flower recognition by using eigenvalues of shape features
Wooi-Nee Tan, Racheal Sem, Yi-Fei Tan
This paper introduces the concept of eigenvalues for describing the shape features of blooming flowers, and implements the idea to recognize blooming flowers automatically. The input images of blooming flowers are taken from natural scenes in the form of RGB images. The proposed method first segments and crops the targeted flower object, then calculates four shape features to form a 2 × 2 matrix. The eigenvalues of the matrices computed from the testing set are then compared with the eigenvalues of the reference set. The advantage of using eigenvalues is that the dimension of the parameters used in the comparison can be reduced. On a database consisting of 5 types of flowers with a total of 46 images, a recognition rate of 80.43% is achieved.
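In outline, the eigenvalue comparison might look like the sketch below. The abstract does not specify which four shape features are used, so the feature names here are hypothetical stand-ins.

```python
import numpy as np

def shape_eigenvalues(f1, f2, f3, f4):
    """Form a 2x2 matrix from four shape features (e.g., circularity,
    solidity, extent, aspect ratio; hypothetical choices) and return
    the sorted magnitudes of its eigenvalues."""
    m = np.array([[f1, f2], [f3, f4]], dtype=float)
    return np.sort(np.abs(np.linalg.eigvals(m)))

def classify(test_feats, reference_sets):
    """Nearest reference flower type by eigenvalue distance.
    reference_sets: {flower_name: (f1, f2, f3, f4)}."""
    ev = shape_eigenvalues(*test_feats)
    return min(reference_sets,
               key=lambda name: np.linalg.norm(
                   ev - shape_eigenvalues(*reference_sets[name])))
```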
X-ray image retrieval system based on visual feature discrimination
Heelah A. Alraqibah, Ouiem Bchir, Mohamed Maher Ben Ismail
In this paper, we propose a medical content-based image retrieval system based on efficient discrimination between visual descriptors within each image category. In addition, the proposed approach reduces the search space during the retrieval phase by incorporating an unsupervised learning and feature weighting component. We use a collection of X-ray images from the ImageCLEF2009 data set to assess the performance of the system. The obtained results show that the proposed approach is faster than typical content-based image retrieval.
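One common way to realize such search-space reduction, shown here as a scikit-learn sketch with the paper's feature-weighting component omitted: cluster the image descriptors offline, then compare a query only against images in its nearest cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_index(features, n_clusters=16):
    """Cluster the image descriptors so queries are compared only
    against images in the nearest cluster."""
    return KMeans(n_clusters=n_clusters, n_init=10).fit(features)

def retrieve(query, features, km, top=10):
    """Rank only the images whose cluster matches the query's."""
    cluster = km.predict(query[None, :])[0]
    idx = np.where(km.labels_ == cluster)[0]   # candidate images
    d = np.linalg.norm(features[idx] - query, axis=1)
    return idx[np.argsort(d)[:top]]
```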
PCNN document segmentation method based on bacterial foraging optimization algorithm
Yanping Liao, Peng Zhang, Qiang Guo, et al.
The Pulse Coupled Neural Network (PCNN) is widely used in the field of image processing, but properly setting its parameters is a difficult task in PCNN applications; so far, determining the model's parameters has required extensive experimentation. To deal with this problem, a document segmentation method based on an improved PCNN is proposed. It uses the maximum entropy function as the fitness function of the bacterial foraging optimization algorithm, adopts the bacterial foraging optimization algorithm to search for the optimal parameters, and thus eliminates the trouble of manually setting the experimental parameters. Experimental results show that the proposed algorithm can effectively perform document segmentation, and its segmentation results are better than those of the comparison algorithms.
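For context, a maximum-entropy fitness of the kind such optimizers maximize can be written as follows; this is a threshold-based simplification, and the PCNN itself and the bacterial foraging search loop are not shown.

```python
import numpy as np

def entropy_fitness(image, threshold):
    """Kapur-style maximum-entropy fitness for a binary segmentation:
    sum of Shannon entropies of the gray levels below and above the
    threshold. `image` is an 8-bit grayscale array."""
    hist = np.bincount(image.reshape(-1), minlength=256).astype(float)
    p = hist / hist.sum()

    def H(part):
        part = part[part > 0]
        if part.size == 0:
            return 0.0
        part = part / part.sum()
        return float(-(part * np.log(part)).sum())

    return H(p[:threshold]) + H(p[threshold:])
```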
An improved reversible data hiding algorithm based on modification of prediction errors
Iyad F. Jafar, Sawsan A. Hiary, Khalid A. Darabkh
Reversible data hiding algorithms are concerned with the ability to hide data and recover the original digital image upon extraction. This capability is of interest in medical and military imaging applications. One particular class of such algorithms relies on histogram shifting of prediction errors. In this paper, we propose an improvement over one popular algorithm in this class. The improvement is achieved by employing a different predictor, using more bins in the prediction-error histogram, and adding multilevel embedding. The proposed extension shows significant improvement over the original algorithm and its variations.
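To make the histogram-shifting idea concrete, here is a minimal one-sided sketch over a single image row: a left-neighbor predictor, one peak bin, bits embedded in the peak, higher errors shifted by one. Overflow/underflow handling and the paper's multilevel embedding are omitted, and the helper names are ours.

```python
import numpy as np

def embed_row(row, bits, peak=0):
    """Histogram-shifting embedding along one row, predicting each
    pixel from the previous stego pixel. Returns (stego, bits_used)."""
    stego, used = row.astype(int), 0
    for i in range(1, len(row)):
        e = int(row[i]) - stego[i - 1]       # prediction error
        if e > peak:
            e += 1                           # shift to make room
        elif e == peak and used < len(bits):
            e += bits[used]                  # embed one bit in peak bin
            used += 1
        stego[i] = stego[i - 1] + e
    return stego, used

def extract_row(stego, n_bits, peak=0):
    """Inverse operation: recover the hidden bits and the original row."""
    row, bits = stego.astype(int), []
    for i in range(1, len(stego)):
        e = int(stego[i]) - int(stego[i - 1])
        if e in (peak, peak + 1) and len(bits) < n_bits:
            bits.append(e - peak)            # read one bit
            e = peak
        elif e > peak + 1:
            e -= 1                           # undo the shift
        row[i] = stego[i - 1] + e            # restore original pixel
    return row, bits
```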
Robust real time extraction of plane segments from time-of-flight camera images
Yosef Dalbah, Dirk Koltermann, Friedrich M. Wahl
We present a method that extracts plane segments from images of a time-of-flight camera. Future driver assistance systems rely on an accurate description of the vehicle's environment, and time-of-flight cameras can be used for environment perception and reconstruction. Since most structures in urban environments are planar, plane segments extracted from single camera images can be used to create a global map. We present a method for real-time detection of planar surface structures from time-of-flight camera data. The concept is based on a planar surface segmentation that serves as the foundation for a subsequent global planar surface extraction. The evaluation demonstrates the ability of the described algorithm to detect planar surfaces from depth data of complex scenarios in real time. We compare our method to state-of-the-art planar surface extraction algorithms.
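For comparison, a generic RANSAC plane extractor over a depth-derived point cloud (explicitly not the authors' segmentation-based method) can be sketched as:

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.02, seed=None):
    """Fit a dominant plane to an (N,3) point cloud with RANSAC.
    Returns ((normal, d), inlier_mask) for the plane n.x + d = 0."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < tol   # distance to plane
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```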
Exposing photo manipulation using geometry and shadows
Jiangbin Zheng, Xuemei Song, Jinchang Ren, et al.
It is increasingly easy to manipulate digital images with sophisticated photo-editing software, and visual inspection often cannot definitively distinguish manipulated from authentic images. This paper introduces a forensic technique that focuses on the geometric and shadow-color inconsistencies which arise when fake objects with shadows are inserted into an image, or when an object and its shadow in the image are modified. The paper analyzes three underlying geometric relations and shadow-color constraints that occur in an image scene. In particular, we (i) explore the property of the vanishing point under linear perspective projection and evaluate the geometric consistency of the image based on the uncertainty of the vanishing point; (ii) analyze the relation between an illuminated object and its cast shadow, modeled by a planar homology, and use this constraint to estimate the image's geometric consistency; and (iii) locate tampered regions by measuring the K-L divergence between shadow pairs. Visually plausible forged images demonstrate the performance of the proposed method.
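Step (iii) compares shadow pairs statistically; a minimal sketch of a K-L divergence between two shadow-region color histograms (our formulation, with smoothing added for empty bins) is:

```python
import numpy as np

def kl_divergence(p_hist, q_hist, eps=1e-10):
    """K-L divergence between two shadow-region color histograms;
    a large value suggests the two shadows are inconsistent with a
    single illumination environment."""
    p = p_hist / p_hist.sum() + eps
    q = q_hist / q_hist.sum() + eps
    return float((p * np.log(p / q)).sum())
```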
Hardware architecture for fast 2D distance transformations
Vasiliki Giannakopoulou, Kostas Masselos
Distance transform (DT) approximations compute the transform over small pixel neighborhoods defined by masks. Many different DT approximations are in use, and some have been proposed for implementation in custom hardware, exploiting the opportunities this offers for accelerating the required operations. In this paper, a custom hardware architecture is proposed to implement fast 2D distance transformations independently of the distance function used. The current implementation targets the Euclidean distance transform, and a Kintex-7 evaluation board is used for this purpose. Area and synthesis results are presented for various image sizes, and a speed increase of 98% over an x86 implementation is observed.
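As a software reference point for what such hardware accelerates, a classic two-pass 3-4 chamfer DT (a mask-based Euclidean approximation; plain Python, so deliberately unoptimized) is:

```python
import numpy as np

def chamfer_dt(binary):
    """Two-pass 3-4 chamfer distance transform.
    binary: 2D bool array, True = feature pixel (distance 0)."""
    INF = 10**9
    h, w = binary.shape
    d = np.where(binary, 0, INF).astype(np.int64)
    for y in range(h):                       # forward raster pass
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y-1, x] + 3)
                if x > 0:     d[y, x] = min(d[y, x], d[y-1, x-1] + 4)
                if x < w - 1: d[y, x] = min(d[y, x], d[y-1, x+1] + 4)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x-1] + 3)
    for y in range(h - 1, -1, -1):           # backward raster pass
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y+1, x] + 3)
                if x > 0:     d[y, x] = min(d[y, x], d[y+1, x-1] + 4)
                if x < w - 1: d[y, x] = min(d[y, x], d[y+1, x+1] + 4)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x+1] + 3)
    return d / 3.0                           # approximate Euclidean distance
```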
FFT algorithm of complex exponent moments and its application in image recognition
Ziliang Ping, Yongjing Jiang, Suhua Zhou, et al.
Orthogonal multi-distorted invariant Complex Exponent Moments (CEMs) are proposed. A fast and accurate 2-D Fast Fourier Transform (FFT) algorithm is used to calculate CEMs. Theoretical analysis is presented to demonstrate the multi-distorted invariant property of CEMs. The proposed method is applied in the pattern recognition of human faces, English letters and Chinese characters. Experimental results show that CEMs have higher quality and lower computational complexity than RHFMs in image reconstruction and pattern recognition.
User experience interaction design for digital educational games
Jiugen Yuan, Wenting Zhang, Ruonan Xing
Bringing game elements into education is one of the newest teaching concepts in the field of educational technology: healthy games are used to stimulate and sustain learners' motivation, improve learning efficiency, and provide the experience of learning by playing. This article first introduces the concepts of digital games and user experience and clarifies the essence of digital games; it then constructs a framework of user experience interaction design for digital educational games and offers design ideas for the development of related products, in the hope that digital games will bring continuous innovation to the learning experience.
An automatic registration method based on runway detection
Xiuqiong Zhang, Li Yu, Guo Huang
Seeing the runway distinctly is a crucial condition during approach and landing. One enhanced-vision method is the fusion of infrared and visible images in an EVS (Enhanced Vision System), and image registration plays a very important role in image fusion. Therefore, an automatic image registration method based on accurate runway detection is proposed. First, the runway is detected in the infrared and visible images using the DWT (discrete wavelet transform). Then, a fitting triangle is constructed from the edges of the runway. The corresponding feature points, extracted from the midpoints of the edges and the centroid of the triangle, are used to compute the transform parameters. The registration results are more accurate and efficient than those of registration based on mutual information. The method is robust, computationally light, and applicable to real-time systems.
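Given such point correspondences, the transform parameters can be estimated in closed form; a least-squares similarity-transform fit (our illustration, not necessarily the paper's exact motion model) is:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src -> dst, both (N,2) arrays of corresponding points.
    Solves dst = [[a, -b], [b, a]] @ src + [tx, ty]."""
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2, 0], A[0::2, 1] = src[:, 0], -src[:, 1]   # x equations
    A[1::2, 0], A[1::2, 1] = src[:, 1],  src[:, 0]   # y equations
    A[0::2, 2] = 1.0
    A[1::2, 3] = 1.0
    rhs = dst.reshape(-1)
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.array([[a, -b, tx],
                     [b,  a, ty]])   # 2x3 affine matrix for warping
```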
On digital image processing technology and application in geometric measure
Jiugen Yuan, Ruonan Xing, Na Liao
Digital image processing is an emerging discipline that has developed alongside semiconductor integrated circuit technology and computer science since the 1960s. This article introduces the techniques and principles of digital image processing for measurement and compares them with traditional optical measurement methods. Taking geometric measurement as an example, it discusses the development trends of digital image processing technology from the perspective of its applications.
Color encryption scheme based on adapted quantum logistic map
Alaa Zaghloul, Tiejun Zhang, Mohamed Amin, et al.
This paper presents a new color image encryption scheme based on a quantum chaotic system. In this scheme, encryption is accomplished by generating an intermediate chaotic key stream with the help of the quantum chaotic logistic map. Each pixel is then encrypted using the cipher value of the previous pixel and the adapted quantum logistic map. The results show that the proposed scheme provides adequate security for the confidentiality of color images.
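The general pattern, keystream generation plus pixel chaining, can be sketched with the classical logistic map; the paper's adapted quantum logistic map has a different iteration, and the parameters below are illustrative only.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Iterate the classical logistic map x -> r*x*(1-x) and quantize
    each state to a byte to form a keystream."""
    ks = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 256) % 256
    return ks

def encrypt(img, x0=0.3, r=3.99):
    """XOR each 8-bit pixel with the keystream and with the previous
    cipher value, chaining pixels as the paper describes.
    Decryption mirrors the loop, taking `prev` from the ciphertext."""
    flat = img.reshape(-1)
    ks = logistic_keystream(x0, r, flat.size)
    out = np.empty_like(flat)
    prev = 0
    for i in range(flat.size):
        out[i] = (int(flat[i]) ^ int(ks[i]) ^ prev) & 0xFF
        prev = int(out[i])
    return out.reshape(img.shape)
```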
Nonlinear interrelation of chaotic time series with wavelet transform and recurrence plot analyses
Two relatively advanced and powerful analysis methods, i.e. coherence-wavelet transform and cross-recurrence plot, which are used to probe the nonlinear interrelation between different time series, have been applied to non-stationary time series in this paper. The case study uses the chaotic time series of astronomical observational data for the time interval from January 1966 to December 2010. We examined the phase dynamical properties between two data sets and found that the availability of a physically meaningful phase definition depends crucially on the appropriate choice of the reference frequencies. Furthermore, their phase shift is not only time-dependent but also frequency-dependent. We conclude that advanced nonlinear analysis approaches are more powerful than traditional linear methods when they are applied to analyze nonlinear and non-stationary dynamical behavior of complex physical systems.
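Of the two tools, the cross-recurrence plot is simple enough to sketch directly; for two (here one-dimensionally embedded) series it is the binary closeness matrix below. The threshold and embedding are assumptions.

```python
import numpy as np

def cross_recurrence_plot(x, y, eps):
    """Binary cross-recurrence matrix of two time series:
    R[i, j] = 1 when states x_i and y_j are closer than eps
    (1-D embedding kept for brevity)."""
    d = np.abs(x[:, None] - y[None, :])
    return (d < eps).astype(np.uint8)
```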
Robust patch-based tracking via superpixel learning
Qianwen Li, Yue Zhou
Aiming at tracking non-rigid objects whose geometric appearance changes over time, we propose a novel patch-based appearance model that adapts to changes of topology. Meanwhile, superpixel learning is adopted as an effective online updating scheme to select and update the patches when a new frame arrives. We build a foreground-background vote map via superpixels to determine the confidence of the patches and guard against drifting. Experimental results show that the proposed approach tracks non-rigid targets robustly and accurately.
Mode-dependent pixel-based weighted intraprediction for HEVC scalable extension
Tang Kha Duy Nguyen, Chun-Chi Chen
The current draft scalable extension to HEVC offers two approaches, RefIdx and TextureRL, for performing inter-layer prediction. In the framework of TextureRL, this paper presents a mode-dependent pixel-based weighted intra prediction scheme for coding the enhancement layer (EL). The scheme first decomposes the EL intra prediction and the collocated base layer reconstructed block into their respective DC and AC components and then computes a weighted sum of both to form a better prediction signal using a pixel-based weighting scheme. The weighting factors to associate with different components are obtained by a least-squares fit to the training data. It was observed that they depend strongly on the EL's intra prediction mode and prediction block size, but are less dependent on QP settings. The experimental results show an average BD-rate savings of 1.0% for the AI-2x configuration and 0.5% for AI-1.5x over the SHM-1.0 anchor.
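The DC/AC decomposition at the heart of the scheme can be illustrated at block level as follows; scalar weights are used here for clarity, whereas the paper's weighting is pixel-based and depends on the intra mode and block size.

```python
import numpy as np

def weighted_intra_prediction(el_pred, bl_recon, w_dc, w_ac):
    """Combine the EL intra prediction and the collocated BL
    reconstruction by weighting their DC and AC components
    separately (block-level simplification)."""
    dc_el, dc_bl = el_pred.mean(), bl_recon.mean()
    ac_el, ac_bl = el_pred - dc_el, bl_recon - dc_bl
    dc = w_dc * dc_el + (1 - w_dc) * dc_bl   # weighted DC component
    ac = w_ac * ac_el + (1 - w_ac) * ac_bl   # weighted AC component
    return dc + ac
```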
Robust non-parametric probabilistic image processing for face recognition and pattern recognition
Meropi Pavlidou, George Zioutas
Face recognition has long been a pattern recognition application of great interest. Many mathematical models have been used for face recognition, among them probabilistic methods. However, probabilistic methods to date rely heavily on the amount of training data and do not fully exploit the two-dimensional information of the images in either the training or the testing sets. This paper presents a new 2-D robust probabilistic method for transforming the principal components of the initial image data, allowing support vector machines to efficiently capture the inference between images. The algorithm encodes every image with the help of robust kernel non-parametric estimation and, in a second stage, uses support vector machines to classify this encoded information. Results show that non-parametric estimation of the probability function of an image highlights the unique characteristics of each person, making it easier for classifiers to group those instances and perform the classification efficiently, leading to better results compared with up-to-date face recognition methods.
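A heavily simplified, one-dimensional sketch of the encode-then-classify pipeline using scikit-learn; the paper's method is 2-D and robust, and the bandwidth, grid size, and data names here are assumptions.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.svm import SVC

def kde_encode(img, bandwidth=0.1, n_bins=32):
    """Encode an image by evaluating a kernel density estimate of its
    normalized pixel intensities on a fixed grid."""
    x = img.reshape(-1, 1).astype(float) / 255.0
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(x)
    grid = np.linspace(0, 1, n_bins).reshape(-1, 1)
    return np.exp(kde.score_samples(grid))   # density values as features

# hypothetical training data: `images` (list of arrays), `labels` (person ids)
# X = np.array([kde_encode(im) for im in images])
# clf = SVC(kernel="rbf").fit(X, labels)     # SVM on the encoded images
```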
Analyzing visual enjoyment of color: using a female nude digital image as an example
This research adopts the three primary colors and their three mixed colors as the main hue variations by changing the background of a female nude digital image. Color saturation is varied between 9S (high saturation) and 3S (low saturation) on the PCCS scale, and the tone elements adopt 3.5 as low brightness, 5.5 as medium brightness (for the primary colors), and 7.5 as high brightness. A water-color brush stroke is applied to two female body digital images, one visually pleasant with an elegant posture and the other unpleasant with stiff body language, to add visual intimacy. Results show that the brightness of color is the main factor affecting visual enjoyment, followed by saturation. Specifically, high brightness with high saturation gains the highest rate of enjoyment, medium brightness (primary color) with high saturation the second, high brightness with low saturation the third, and low brightness with low saturation the least.
Development of software for airborne photos analysis
J. Rudowicz-Nawrocka, R. J. Tomczak, K. Nowakowski, et al.
UAV/UAS systems enable the acquisition of huge amounts of data, such as images, and IT systems are necessary for their storage and analysis; existing systems do not always allow researchers to perform the operations they wish to [1]. The purpose of this research is to automate the process of recognizing objects and phenomena occurring on grasslands. The work is based on numerous collections of images taken from an oktokopter [2]. To collect, manage, and analyze the image and character data acquired in the course of the research, several computer programs have been produced in accordance with the principles of software engineering. The resulting software differs in functionality and type. The applications were built with a number of popular technologies, chosen primarily for their suitability for specific tasks, their availability on different platforms, and the ability to distribute them as open source. The applications presented by the authors, designed to assess the status of grassland from aerial photography, show the complexity of the issues and at the same time point to further research.