Biomedical Image Processing / Medical Image Processing
Parisa Gifani; Hamid Behnam; Zahra Alizadeh Sani
Volume 4, Issue 2, June 2010, Pages 149-160
Abstract
Dimensionality reduction is an important task in machine learning that simplifies data mining, image processing, classification and visualization of high-dimensional data by mitigating undesired properties of high-dimensional spaces. Manifold learning is a relatively new approach to nonlinear dimensionality reduction. Manifold learning algorithms are based on the intuition that the dimensionality of many data sets may be artificially high and that each data point can be described as a function of only a few underlying parameters. Using this tool, the intrinsic parameters of the data set, which are its main distinguishing factors, are recognized; the data lie on a manifold that reveals the real relationship between the parameters. One successful application of these methods is in image analysis. In this approach, each image is a point in a high-dimensional space whose dimensions are its pixels. Because echocardiography images obtained from a patient differ in quantitative parameters such as the periodic motion of the heartbeat and noise, the image set is reduced to a two-dimensional space by a suitable manifold learning method. In this article, after mapping echocardiography images into a two-dimensional space using the LLE and Isomap algorithms, similar images were placed side by side, and the relationships between the images, reflecting the cyclic property of the heartbeat, became evident. The results showed the weakness of the Isomap algorithm and the strength of the LLE algorithm in preserving the relation between consecutive frames. Denoising is an important application that emerged from this research.
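The LLE step described above can be sketched with a minimal NumPy implementation. The "frames" below are synthetic stand-ins for echocardiography images (each frame is a point whose coordinates are its pixels), and the neighborhood size and regularization constant are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def lle(X, n_neighbors=8, n_components=2, reg=1e-3):
    """Minimal Locally Linear Embedding (in the spirit of Roweis & Saul)."""
    n = X.shape[0]
    # pairwise squared distances -> k nearest neighbours (excluding self)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[knn[i]] - X[i]                           # centre neighbours on x_i
        G = Z @ Z.T                                    # local Gram matrix
        G += reg * np.trace(G) * np.eye(n_neighbors)   # regularise for stability
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, knn[i]] = w / w.sum()                     # reconstruction weights sum to 1
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]                 # skip the constant eigenvector

# synthetic "heartbeat" frames: pixel values vary periodically with cardiac phase
phase = np.linspace(0, 2 * np.pi, 60, endpoint=False)
frames = np.cos(np.outer(phase, np.arange(1, 41)))     # 60 frames, 40 "pixels"
Y = lle(frames)                                        # 60 x 2 embedding
```

Because the frames depend on a single cyclic parameter (the phase), consecutive frames land next to each other in the 2-D embedding, which is exactly the property the abstract exploits.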
Biomedical Image Processing / Medical Image Processing
Ali Taalimi; Emadoddin Fatemizadeh
Volume 4, Issue 3, June 2010, Pages 231-248
Abstract
Functional magnetic resonance imaging (fMRI) is widely used for the investigation of brain neural activity. This imaging technique obtains signals and images from the human brain's response to prescheduled tasks. Several studies on blood oxygenation level-dependent (BOLD) signal responses demonstrate nonlinear behavior in response to a stimulus. In this paper we investigate nonlinear modeling of BOLD signal activity to capture the nonlinear and time-variant behavior of this physiological system. For this purpose, two categories of nonlinear methods are considered: first, those that emphasize the physiological parameters affecting the BOLD response, and second, methods that model the input and output of the system without reference to the hidden state variables (physiological parameters). The balloon model is analyzed and a new approach for activation detection based on this model is introduced. In addition, the Hammerstein-Wiener, NARMA and Volterra-kernel models are investigated as nonlinear, non-physiological methods, and their ability to detect activation is compared. The activation detection methods were applied to two data sets (real and synthetic). For the synthetic data, at a threshold of 0.45, the Jaccard index for the Hammerstein-Wiener, NARMA, and Volterra models was 0.9, 1.0, and 0.91, respectively. For the real data set, at the optimal thresholds (0.35, 0.4, and 0.45), the same index was 0.85, 0.90, and 0.87, respectively.
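As a sketch of the non-physiological model category, a truncated second-order Volterra model maps a stimulus to a response through a linear kernel plus a quadratic kernel. The boxcar stimulus and the kernels below are hypothetical toy choices, not kernels estimated from fMRI data.

```python
import numpy as np

def volterra2(u, h1, h2):
    """Truncated second-order Volterra model:
    y[n] = sum_k h1[k] u[n-k]  +  sum_{k,l} h2[k,l] u[n-k] u[n-l]."""
    M = len(h1)
    up = np.concatenate([np.zeros(M - 1), u])   # zero-pad past samples
    y = np.zeros(len(u))
    for t in range(len(u)):
        x = up[t:t + M][::-1]                   # u[n], u[n-1], ..., u[n-M+1]
        y[t] = h1 @ x + x @ h2 @ x              # linear term + quadratic term
    return y

# toy BOLD-like setup: boxcar stimulus, decaying linear kernel,
# small quadratic kernel that introduces saturation-like nonlinearity
u = np.zeros(50); u[5:15] = 1.0
h1 = np.exp(-np.arange(8) / 3.0)
h2 = 0.05 * np.outer(h1, h1)
y = volterra2(u, h1, h2)
```

The quadratic term is what lets such a model reproduce the amplitude-dependent (nonlinear) behavior of the BOLD response that a purely linear convolution cannot.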
Biomedical Image Processing / Medical Image Processing
Hamid Abrishami Moghaddam; Maryam Momeni; Kamran Kazemi; Reinhard Grebe; Fabrice Wallois
Volume 4, Issue 4, June 2010, Pages 337-360
Abstract
Diagnostic follow-up of brain development during the neonatal period and childhood is an important clinical task. Any disturbance of this process can cause pathological deviations, especially if the baby is born prematurely. Recent advances in magnetic resonance imaging allow obtaining high-resolution images of the neonatal brain. After segmentation, the brain images can be used to reconstruct and model the changes occurring during neonatal brain development. In addition, such a near-realistic model of the head, including the skin, skull and brain, can be used to solve the inverse problem of determining the sources of registered signals from electrical brain activity. Although numerous methods and various modeling schemes exist for adults, these cannot be used directly for neonates due to important differences in morphology. In this review article, neonatal brain atlases are divided into three categories: individual atlases, probabilistic atlases and stochastic atlases. Existing neonatal brain atlases are then placed in this classification and their methods of construction are presented. Furthermore, the strengths and weaknesses of these neonatal brain atlases are analyzed, and finally future research trends in this area are outlined.
Biomedical Image Processing / Medical Image Processing
Hossein Rabbani
Volume 3, Issue 1, June 2009, Pages 1-14
Abstract
In this paper, ultrasonic images are initially deblurred using the gradient method, and then the estimates of the image and the point spread function (PSF) are improved using denoising techniques. To this end, a criterion with appropriate regularizers (which preserve the edges) is first defined for the iterative gradient method; the estimate of the PSF is then improved using a denoising technique based on an anisotropic window around each pixel. The initial estimate of the image is also improved using a denoising method in the complex wavelet domain that employs a maximum a posteriori (MAP) estimator with a local Laplacian prior density function. Applying these denoising methods on top of the gradient method enables our algorithm to reduce visual artifacts and preserve the edges in the deblurred images. Our simulations show that the proposed method outperforms other methods both visually and quantitatively.
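The iterative gradient (Landweber) core of such a deblurring scheme can be sketched in one dimension. This is the plain iteration only: the paper's edge-preserving regularizers and wavelet-domain denoising are deliberately omitted, and the spike-train signal and Gaussian PSF are illustrative.

```python
import numpy as np

def landweber_deblur(y, H, n_iter=200, step=0.5):
    """Gradient (Landweber) iterations for y = h * x (circular convolution):
    x <- x + step * h^T (y - h * x), carried out in the Fourier domain."""
    Y = np.fft.fft(y)
    X = Y.copy()                              # initialise with the blurred data
    for _ in range(n_iter):
        X = X + step * np.conj(H) * (Y - H * X)
    return np.real(np.fft.ifft(X))

# toy 1-D example: blur a spike train with a centred Gaussian PSF, then invert
n = 64
x_true = np.zeros(n); x_true[[20, 40]] = 1.0
k = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2); k /= k.sum()
psf = np.zeros(n); psf[:9] = k
psf = np.roll(psf, -4)                        # centre the kernel at index 0
H = np.fft.fft(psf)
y = np.real(np.fft.ifft(np.fft.fft(x_true) * H))
x_hat = landweber_deblur(y, H)
```

The step size must satisfy `step < 2 / max|H|^2` for convergence; in practice the iteration sharpens the blurred spikes while the strongly attenuated frequencies recover only slowly, which is exactly where regularization and denoising help.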
Biomedical Image Processing / Medical Image Processing
Mohammad Hosein Miranbeigi; Leila Mohammadi; Sahar Moghimi; Giti Torkaman
Volume 3, Issue 1, June 2009, Pages 15-24
Abstract
Collagen content and its configuration are considered to be among the important criteria of healing in tissues. Therefore, developing a method to estimate these factors can provide physicians with valuable information. In this paper, we examine the variation of collagen in tissue-mimicking phantoms as well as in vivo tissue by applying image processing techniques to ultrasound images of the samples. In the phantoms, we used an agar-water matrix as the base tissue and graphite to simulate collagen. We also used different concentrations of graphite to simulate different collagen contents, according to the attenuation coefficient of ultrasound waves in soft tissue and its correlation with the weight ratio of graphite. Experimental and simulation results show that an increase in the concentration of graphite in the phantoms results in higher energy and a higher contrast level in B-mode images (r=0.99, p
Biomedical Image Processing / Medical Image Processing
Hamed Rakhshan; Hamid Behnam
Volume 3, Issue 1, June 2009, Pages 25-31
Abstract
Vibroacoustography is a relatively new elasticity imaging method that uses the dynamic (oscillatory) radiation force of ultrasound to vibrate tissue at low frequency (in the kilohertz range). The resulting acoustic emission is recorded with a sensitive hydrophone to produce images that are related to the mechanical properties of the tissue. This force is produced by two continuous overlapping ultrasound beams with slightly different frequencies. Vibroacoustography has been applied to image microcalcifications in the breast and arteries. The lateral resolution of this imaging method is about 0.7 mm and its axial resolution is about 12 mm. In this paper, two major methods of producing the dynamic radiation force, confocal and X-focal (two concave transducers whose axes cross at their foci at an angle θ), are analyzed. A new method for improving axial resolution using short-duration pulses is introduced. Simulation results show an improvement of about 50% in axial resolution using short-duration pulses.
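The difference-frequency mechanism can be illustrated numerically: squaring the sum of two slightly detuned beams (the radiation force follows the short-time average of p²) leaves a slow component at Δf, which is what vibrates the tissue. The carrier, difference frequency and sampling rate below are illustrative values, not the paper's transducer parameters.

```python
import numpy as np

fs = 50e6                      # sampling rate, Hz (illustrative)
f1, df = 3.0e6, 50e3           # 3 MHz carrier, 50 kHz difference (illustrative)
t = np.arange(0, 200e-6, 1 / fs)
# two overlapping continuous-wave beams at f1 and f1 + df
p = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * (f1 + df) * t)
force = p ** 2                 # radiation force is proportional to p^2
# the spectrum of p^2 contains DC, 2*f1, 2*(f1+df), 2*f1+df ... and df itself;
# only df survives below 1 MHz
spec = np.abs(np.fft.rfft(force - force.mean()))
freqs = np.fft.rfftfreq(len(force), 1 / fs)
mask = (freqs > 0) & (freqs < 1e6)
peak = freqs[mask][np.argmax(spec[mask])]   # ≈ df = 50 kHz
```

The tissue thus experiences a kilohertz-range driving force even though both beams are in the megahertz range, which is the core idea behind both the confocal and X-focal configurations.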
Biomedical Image Processing / Medical Image Processing
Babak Mohammadzadeh Asl; Ali Mahloojifar
Volume 3, Issue 1, June 2009, Pages 33-46
Abstract
In recent years, adaptive beamforming methods have been successfully applied to medical ultrasound imaging, resulting in a significant improvement in image quality compared to non-adaptive beamformers. This improvement results from the fact that their weights are chosen based on a priori knowledge of the received data and updated using the current statistics of the array signal. Most of the adaptive beamformers presented in the ultrasound imaging literature are based on the minimum variance (MV) beamformer, which can improve the imaging resolution while retaining the contrast. It is desirable for a beamformer to improve resolution and contrast at the same time. To this end, in this paper we use temporal averaging in addition to the conventional spatial averaging to estimate a more accurate covariance matrix. Moreover, we combine coherence factor weighting with MV beamforming to enhance the focusing quality and hence reduce the undesired side lobes. The efficacy of the proposed adaptive beamforming approach is demonstrated via a number of simulated and experimental examples.
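A single-snapshot sketch of the MV-plus-coherence-factor combination is below. It uses subarray (spatial) averaging and diagonal loading on synthetic, already delay-aligned element data; the temporal averaging the paper adds, and the loading value and subarray length chosen here, are assumptions for illustration.

```python
import numpy as np

def mv_weights(R, a, diag_load=1e-2):
    """Minimum-variance weights w = R^-1 a / (a^H R^-1 a), with diagonal loading."""
    L = len(a)
    R = R + diag_load * np.trace(R).real / L * np.eye(L)
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

def mv_cf_sample(x, L):
    """One image sample from one aligned snapshot x (M elements):
    subarray-averaged covariance -> MV weights -> coherence-factor weighting."""
    M = len(x)
    K = M - L + 1
    R = sum(np.outer(x[k:k + L], x[k:k + L].conj()) for k in range(K)) / K
    a = np.ones(L, dtype=complex)        # steering vector after delay alignment
    w = mv_weights(R, a)
    amp = np.mean([w.conj() @ x[k:k + L] for k in range(K)])
    cf = abs(x.sum()) ** 2 / (M * (np.abs(x) ** 2).sum())   # coherence factor
    return cf * amp

# toy aligned snapshot: coherent unit signal plus weak noise on 32 elements
rng = np.random.default_rng(0)
x = 1.0 + 0.1 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))
y = mv_cf_sample(x, L=8)
```

For a coherent on-axis signal the coherence factor stays near 1, while for incoherent (side-lobe or clutter) energy it drops toward zero, which is how the combination suppresses side lobes without sacrificing the MV resolution gain.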
Biomedical Image Processing / Medical Image Processing
Bahram Momen Mehrabani; Mohammad Javad Abolhassani; Alireza Ahmadian; Javad Alirezaie
Volume 3, Issue 1, June 2009, Pages 47-54
Abstract
The main purpose of this work is to introduce a novel method of temperature monitoring using B-mode ultrasound digital images. The thermal dependence of the speed of sound causes a virtual displacement of scatterer particles. This virtual displacement is computed using speckle tracking methods. The Horn-Schunck algorithm was applied to a tissue-mimicking phantom to measure the virtual displacement. A heating resistor was used in this phantom to generate a temperature elevation. The DICOM ultrasound images were acquired using a commercial SIEMENS ultrasound imaging system with a 10 MHz linear probe. The accuracy of the noninvasive temperature estimation was assessed by comparison with invasive temperature measurements. The phantom was warmed up to 8. The mean error of the temperature estimation was found to be 0.4°C and the peak error 0.9°C. Fast temperature estimation can be achieved using optical-flow methods. Horn-Schunck is a differential motion estimation method that estimates displacement by calculating the optical-pattern changes caused by movements between two frames. Noise sensitivity is the main weakness of the Horn-Schunck method.
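The Horn-Schunck update the abstract relies on can be written in a few lines of NumPy. The two "frames" below are a synthetic smooth pattern shifted by a known sub-pixel amount (a stand-in for the thermally induced virtual displacement); the smoothness weight and iteration count are illustrative.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=0.5, n_iter=300):
    """Minimal Horn-Schunck optical flow: iterate
    u = ubar - Ix*(Ix*ubar + Iy*vbar + It)/(alpha^2 + Ix^2 + Iy^2)  (v likewise),
    where ubar/vbar are 4-neighbour averages enforcing smoothness."""
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1); v = np.zeros_like(I1)

    def avg4(f):                       # 4-neighbour mean with edge padding
        fp = np.pad(f, 1, mode='edge')
        return 0.25 * (fp[:-2, 1:-1] + fp[2:, 1:-1] + fp[1:-1, :-2] + fp[1:-1, 2:])

    denom = alpha ** 2 + Ix ** 2 + Iy ** 2
    for _ in range(n_iter):
        ub, vb = avg4(u), avg4(v)
        common = (Ix * ub + Iy * vb + It) / denom
        u = ub - Ix * common
        v = vb - Iy * common
    return u, v

# synthetic frames: a smooth pattern shifted 0.3 px to the right
x = np.arange(64)
I1 = np.tile(np.sin(2 * np.pi * x / 16), (32, 1))
I2 = np.tile(np.sin(2 * np.pi * (x - 0.3) / 16), (32, 1))
u, v = horn_schunck(I1, I2)
```

The recovered horizontal flow averages close to the true 0.3 px shift; mapping such sub-pixel apparent shifts to temperature change is the calibration step of the method, and the algorithm's reliance on image gradients is also the source of the noise sensitivity noted above.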
Biomedical Image Processing / Medical Image Processing
Meysam Torabi; Emadoddin Fatemizadeh
Volume 3, Issue 3, June 2009, Pages 213-225
Abstract
In this paper, an MRI-based diagnostic approach is proposed which simultaneously analyzes T1-MR and T2-MR images. The data set contains 120 cross-sectional images of abnormal brains and of normal brains as a control group. Due to the inherent properties of T1 and T2 images and their principal differences, particular features were extracted from each image type. Then, a more meaningful representation was constructed by automatically eliminating redundant data and generating a semi-linear combination of the remaining features. Considering that Alzheimer's disease mainly damages the gray and white matter of the brain, and knowing that these parts of the brain can be observed more clearly in T1 images, the classifier, which works under a nonlinear structure, assigns more weight to the T1 images than to the T2 images. The images, after being registered, were processed in two groups: training and test sets. According to the results, three-fourths of the data set, which was obtained from Harvard University's Whole Brain Atlas, was correctly diagnosed.
Biomedical Image Processing / Medical Image Processing
Mohammad Aboonajmi; Asadollah Akram; Seyed Kamaloddin Setarehdan; Ali Rajabipour
Volume 3, Issue 1, June 2009, Pages 55-65
Abstract
Ultrasound is a rapidly growing research tool that shows increasing use in the food industry for both analysis and modification of food products. Quality assessment of agricultural material has an important role in modern agriculture. This study demonstrates the possibility of non-destructively predicting the main quality indices of commercial eggs by processing a short ultrasound burst passing through the egg material and calculating the ultrasound phase velocity. For this purpose, a set of three hundred commercial eggs (Boris Brown, 33 weeks of age) were purchased from a farm on the first day of laying and classified into two groups. The first group was kept at room temperature (22-25°C) while the second group was kept in a refrigerator (4-5°C). Every week, 25 eggs were picked from each group (room and refrigerator) and first subjected to the nondestructive ultrasound test at room temperature. Each day, the ultrasound signal was recorded from the eggs first. Then, immediately afterwards, the air cell height, the thick albumen height, the Haugh unit and the yolk index of the eggs were determined destructively for comparison purposes. Significant differences at the 5% level between the means of the destructive measurements at different days of storage were found using ANOVA. Both the Haugh unit and the yolk index decreased over 5 weeks of storage at room temperature and in the refrigerator, while the air cell height increased. The lower the Haugh unit for the refrigerated eggs, the lower the phase velocity (1573 m/s on the first day compared to 1540 m/s after 3 weeks). Similar changes of the phase velocity were found for the eggs kept at room temperature (1571 m/s on the first day compared to 1514 m/s after 3 weeks).
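The principle of recovering sound speed from a burst transmitted through the sample can be sketched numerically. The paper computes the phase velocity from the spectral phase; this sketch uses the simpler cross-correlation time-of-flight estimate as a stand-in, and the path length, sound speed and burst shape are illustrative assumptions.

```python
import numpy as np

fs = 10e6                      # sampling rate, Hz (illustrative)
d = 0.045                      # propagation path through the egg, m (illustrative)
c_true = 1540.0                # assumed speed of sound, m/s
delay = d / c_true             # transit time across the sample
t = np.arange(0, 100e-6, 1 / fs)
# transmitted burst: 1 MHz tone with a Gaussian envelope
tx = np.sin(2 * np.pi * 1e6 * t) * np.exp(-((t - 20e-6) / 5e-6) ** 2)
# received burst: ideal delayed copy (dispersionless, lossless medium)
rx = np.sin(2 * np.pi * 1e6 * (t - delay)) * np.exp(-((t - delay - 20e-6) / 5e-6) ** 2)
# transit time from the cross-correlation peak, then speed = distance / time
xc = np.correlate(rx, tx, mode='full')
lag = np.argmax(xc) - (len(t) - 1)
c_est = d / (lag / fs)
```

As the Haugh unit falls during storage, the measured velocity drops (1573 to 1540 m/s in the abstract); the same pipeline applied to real transmit/receive pairs would track that change.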
Biomedical Image Processing / Medical Image Processing
Mojtaba Hajihasani; Yaghoub Farjami; Bijan Vosoughi Vahdat; Jahangir Tavakoli
Volume 3, Issue 1, June 2009, Pages 67-77
Abstract
The increasing number of diagnostic and therapeutic applications of finite-amplitude ultrasound in medicine and biology has motivated researchers toward more accurate modeling and more efficient simulation of the nonlinear ultrasound regime. One of the most widely used nonlinear models for the propagation of 3D diffractive sound beams in dissipative media is the KZK (Khokhlov-Zabolotskaya-Kuznetsov) parabolic nonlinear wave equation. Various numerical algorithms have been developed to solve the KZK equation. Generally, these algorithms fall into one of three main categories: frequency domain, time domain, and combined time-frequency domain. The intrinsic parabolic approximation in the KZK equation limits the accuracy of the solution to the diffraction term, particularly for field points close to the source or in the far off-axis region. In this work we developed a novel generalized time-domain numerical algorithm to solve the diffraction term of the KZK equation. The algorithm solves the Laplacian operator of the KZK equation in 3D Cartesian coordinates using novel 5-point implicit backward finite difference (IBFD) and 5-point Crank-Nicolson finite difference (CNFD) techniques. This leads to a more uniform discretization of the Laplacian operator, which in turn results in a more accurate solution to the diffraction term of the KZK equation. A comparison between results obtained with the new algorithm and previously published data for rectangular ultrasound sources is presented.
Biomedical Image Processing / Medical Image Processing
Azar Tolouee; Hamid Abrishami Moghaddam; Masoume Giti
Volume 2, Issue 3, June 2008, Pages 179-189
Abstract
Automatic classification of lung tissue patterns in high-resolution computed tomography (HRCT) images of patients affected by interstitial lung diseases (ILD) is an important stage in the construction of a computer-aided diagnosis system. In this study, classification of lung tissue patterns was conducted using a new machine learning approach. The proposed system comprises three stages. In the first stage, the parenchyma region in HRCT lung images is separated using a set of thresholding, filtering and morphological operators. In the second stage, two sets of overcomplete wavelet filters, namely discrete wavelet frames and rotated wavelet frames, are utilized to extract features from the defined regions of interest (ROIs) within the parenchyma. In the third stage, the fuzzy k-nearest neighbor algorithm is employed to perform the pattern classification. Our experiments in lung pattern classification were performed on four different lung tissue patterns (ground glass, honeycombing, reticular, and normal) selected from a database of 340 images from 17 subjects. After applying the technique to classify these patterns in small ROIs, we extended the classification scheme to the whole lung in order to produce quantitative scores of abnormalities in the lung parenchyma of the patients. The performance of the proposed method was compared with two state-of-the-art computer-based methods for lung tissue characterization. It was also validated against experienced observers. The average kappa statistic of agreement between two radiologists and the computer was found to be 0.6543, whereas the average kappa statistic for the interobserver agreement was 0.6848. This computer system can approach the performance of expert observers in diagnosing regions of interest and can help to produce objective measures of abnormal patterns in lung HRCT images.
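The classification stage can be sketched with a minimal fuzzy k-NN (in the spirit of Keller et al.): class memberships are weighted by inverse distance over the k nearest training samples. The synthetic 2-D features and the k, m values below are illustrative; the paper's actual features come from the wavelet-frame stage.

```python
import numpy as np

def fuzzy_knn(X_train, y_train, x, k=5, m=2):
    """Fuzzy k-NN: membership of x in each class is the inverse-distance-
    weighted vote of the k nearest training samples (fuzzifier m > 1)."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1))
    classes = np.unique(y_train)
    mu = np.array([w[y_train[idx] == c].sum() for c in classes]) / w.sum()
    return classes[np.argmax(mu)], mu        # hard label + fuzzy memberships

# two synthetic feature clusters standing in for two tissue patterns
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
label, mu = fuzzy_knn(X, y, np.array([1.9, 2.1]))
```

Unlike crisp k-NN, the membership vector `mu` conveys how ambiguous an ROI is, which is useful when aggregating per-ROI decisions into whole-lung abnormality scores.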
Biomedical Image Processing / Medical Image Processing
Emadoddin Fatemizadeh; Parisa Shooshtari
Volume 2, Issue 3, June 2008, Pages 191-201
Abstract
Nowadays, owing to the huge capacity and bandwidth required for medical image communication and storage, medical image compression is one of the most important topics in this area. Error-free compression techniques have the weakness of a low compression ratio. On the other hand, lossy techniques with a high compression ratio result in low image quality. In recent years, some special compression schemes have been suggested that split the original image into two regions: a region of interest (ROI) with lossless compression and a region of background (ROB) with lossy compression and lower quality. In this paper, we propose a novel selective compression approach to compress 3D brain MR images. For this purpose, an adaptive mesh for the first slice was designed, and the gray levels of the next slices were estimated through deformations of the mesh elements. After determining the residual image, the error between the original image and the approximated image was transformed to the wavelet domain using a region-based discrete wavelet transform (RBDWT). Finally, the wavelet coefficients were coded by an object-based SPIHT coder.
Biomedical Image Processing / Medical Image Processing
Mehdi Marsousi; Javad Alirezaie; Armen Kocharian
Volume 2, Issue 3, June 2008, Pages 203-214
Abstract
In this paper, a new method for boundary detection of the left ventricle in echocardiography images is proposed. We have modified the B-spline snake algorithm to achieve much faster convergence and greater robustness to the noise in echocardiography images. A novel approach for inserting new node points during iterations is applied to maintain a maximum distance between two adjacent nodes. This strategy simultaneously increases the smoothness of the contour and optimizes the computational time. A multi-resolution strategy is also adopted to provide further robustness to noise in the images. In addition, morphological operators are utilized to specify the initial contour automatically within the left ventricle chamber. The parameters of the node points are determined during each transition from coarser to finer resolution according to the average intensity of the sample points on the contour near each node point. The volumes of the left ventricle in the end-systolic and end-diastolic frames are calculated using the modified Simpson method. The ejection fraction ratio, which is frequently used by specialists before surgery, is also calculated. Moreover, a method is introduced to draw a 3D model of the left ventricle with the aid of the B-spline basis functions. The proposed method is assessed by comparing the obtained results with clinical observations by expert radiologists and demonstrates high accuracy.
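The Simpson (method-of-disks) volume and ejection fraction calculation is simple enough to show directly: the cavity is sliced into equal disks along the long axis and the disk volumes are summed. The diameter profiles and long-axis lengths below are hypothetical numbers, not patient measurements.

```python
import numpy as np

def simpson_volume(diameters_mm, length_mm):
    """LV volume by the method of disks: n equal slabs of height h along the
    long axis, each treated as a disk of measured diameter d_i:
    V = sum_i pi/4 * d_i^2 * h (converted from mm^3 to mL)."""
    d = np.asarray(diameters_mm, dtype=float)
    h = length_mm / len(d)
    return np.pi / 4.0 * np.sum(d ** 2) * h / 1000.0

# hypothetical end-diastolic / end-systolic diameter profiles (mm), apex last
edd = [30, 38, 42, 44, 44, 42, 38, 30, 18, 8]
esd = [22, 28, 31, 33, 33, 31, 28, 22, 13, 6]
edv = simpson_volume(edd, 90.0)          # end-diastolic volume, mL
esv = simpson_volume(esd, 82.0)          # end-systolic volume, mL
ef = 100.0 * (edv - esv) / edv           # ejection fraction, %
```

In the paper the per-disk diameters come from the detected B-spline contour at end-diastole and end-systole; everything after that is this arithmetic.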
Biomedical Image Processing / Medical Image Processing
Saeed Kermani; Hamid Abrishami Moghaddam; Mohammad Hasan Moradi
Volume 2, Issue 3, June 2008, Pages 215-231
Abstract
This paper presents a new method for quantitative analysis of left ventricular performance from sequences of cardiac magnetic resonance imaging using a three-dimensional active mesh model (3DAMM). The AMM is composed of the topology and geometry of the LV and associated elastic material properties. The LV deformation is estimated by fitting the model to initial sparse displacements measured by a new point-correspondence procedure. To improve the model, a new shape-based interpolation algorithm was proposed for reconstruction of the intermediate slices. The proposed approach is capable of estimating the displacement field for every desired point of the myocardial wall. This yields a dense motion field and local dynamic parameters such as the Lagrangian strain. To evaluate the performance of the proposed algorithm, eight image sequences (six real and two synthetic sets) were used and the findings were compared with those reported by other researchers. For the synthetic image sequence sets, the mean square error between the motion field estimated by the algorithm and the analytical values was less than 0.5 mm. The results showed that the strain measurements of the normal cases were generally consistent with previously published values. The results of the analysis of a patient data set were also consistent with the clinical evidence. In conclusion, the results demonstrated the superiority of the novel strategy with respect to our previously presented algorithm. Furthermore, the results are comparable to current state-of-the-art methods.
Biomedical Image Processing / Medical Image Processing
Raheleh Kafieh; Alireza Mehri Dehnavi; Saeed Sadri; Seyed Hamid Raji
Volume 2, Issue 3, June 2008, Pages 233-246
Abstract
Cephalometry is the scientific measurement of head dimensions to predict craniofacial growth, plan treatment and compare different cases. There have been many attempts to automate cephalometric analysis with the aim of reducing the time required to obtain an analysis, improving the accuracy of landmark identification and reducing errors due to clinician subjectivity. This paper introduces a method for automatic landmark detection on cephalograms that combines model-based methods and neural networks. For this purpose, some feature points were first extracted using a nonlinear diffusion filter and the SUSAN edge detector to model the size, rotation, and translation of the skull. A neural network was used to classify the images according to their geometrical specifications. Using learning vector quantization (LVQ), the possible coordinates of the landmarks were estimated for every new image. Then a modified active shape model (ASM) was applied, with a local search for the best match to the intensity profile, and every point was moved to its best location. Finally, a sub-image matching procedure was applied to pinpoint the exact location of each landmark. In order to evaluate the method, 20 randomly selected images were used with a leave-one-out scheme. Each image had a dimension of about 170x200 mm, digitized at 100 dpi (4 pixels = 1 mm). On average, 24% of the 16 landmarks were within 1 mm of the correct coordinates, 61% within 2 mm, and 93% within 5 mm. The proposed method is a distinct improvement over other proposed methods of automatic landmark detection.
Biomedical Image Processing / Medical Image Processing
Poune Roshani Tabrizi; Reza Aghaeizade Zoroofi
Volume 2, Issue 3, June 2008, Pages 247-266
Abstract
Drowsiness detection is vital in preventing traffic accidents. In this project, we propose three new algorithms for pupil and iris detection, lip localization and eye state analysis, which we incorporate into a four-step system for drowsiness detection: face detection, drowsiness parameter extraction from the eyes, drowsiness parameter extraction from the mouth and drowsiness level determination. Many current efforts based on face analysis focus on using only a single visual cue to characterize the driver's state of alertness. Relying on a single visual cue may cause difficulty when the required visual features cannot be acquired accurately or reliably. There are few systems that use several visual cues to characterize the driver's state of alertness, and those systems are based on IR illuminators or training data. IR illuminators can be hazardous to eye health. Thus, our proposed system determines the drowsiness level using a combination of several visual cues and contextual information, and it requires no training data at any step and no IR illuminators. We analyzed and compared different parts of the system with other methods using the IMM, HCE, and CVL databases and 30 video sequences, in drowsy and active states, from 15 persons. Finally, we achieved excellent drowsiness level results from the study population. We determined the drowsiness level as follows: 1. the eye and mouth states (open or closed) were detected with accuracies of 94.3% and 95.1%, respectively; 2. the drowsiness level was determined in different situations such as normal blinking, fast blinking, normal speaking, yawning and long eye closure; and 3. the participants were given a warning message when the drowsiness level rose above the threshold of 0.95.
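The eye-closure contribution to such a drowsiness level can be sketched as a sliding-window closed-frame ratio (PERCLOS-style), with a warning once it crosses the 0.95 threshold quoted above. The window length and the frame sequence are illustrative, and the paper's full level additionally fuses mouth cues (yawning) and contextual information.

```python
def drowsiness_level(eye_closed, win=30):
    """Fraction of closed-eye frames over a sliding window: 0 = fully alert,
    1 = eyes closed for the whole window (long eye closure)."""
    levels = []
    for i in range(len(eye_closed)):
        w = eye_closed[max(0, i - win + 1):i + 1]
        levels.append(sum(w) / len(w))
    return levels

# toy per-frame eye states: normal open eyes, then a long closure
frames = [0] * 20 + [1] * 40          # 0 = open, 1 = closed
levels = drowsiness_level(frames)
alarm = any(level > 0.95 for level in levels)
```

Normal blinks only nudge the ratio briefly, while a sustained closure drives it to 1.0, which is why a high threshold such as 0.95 separates blinking from genuine drowsiness.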
Biomedical Image Processing / Medical Image Processing
Ali Rafiei; Mohammad Hasan Moradi; Mohammad Reza Farzaneh
Volume 1, Issue 2, June 2007, Pages 83-93
Abstract
A new filter was designed and validated for speckle-noise removal in sonography images. The filter combines neural-network learning, fuzzy information, and genetic-algorithm optimization. A multi-layer perceptron neural network with binary weights is used. Statistical features of each pixel's neighborhood window serve as inputs to estimate the noise level, which is then fuzzified and adjusted by simple fuzzy rules. The membership-function widths and the network weights are optimized by an on-line genetic algorithm. The on-line algorithm contains a single individual, defined as a queen, and creates the next generation using only the mutation operator. The performance of this filter was compared with other speckle-noise reduction techniques such as the median and homomorphic Wiener filters. The proposed method removes speckle noise effectively while preserving fine image details better than the other methods. Two genetic algorithms, a classic one and an on-line one, were used; the classic algorithm maintains a population of 50 strings. The results showed that both algorithms achieve the same noise reduction, but the classic one is slower.
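The single-individual, mutation-only search described above can be sketched as a (1+1)-style hill climber. The operator details, step budget, and mutation distribution below are illustrative assumptions, not the paper's exact design.

```python
import random

def queen_ga(fitness, genome, mutate, steps=500, seed=0):
    """Minimal sketch of the single-individual ('queen') on-line GA: each
    generation is produced by mutation alone, and the mutant replaces the
    queen only when it improves (minimizes) the fitness."""
    rng = random.Random(seed)
    best, best_fit = genome, fitness(genome)
    for _ in range(steps):
        candidate = mutate(best, rng)       # mutation is the only operator
        f = fitness(candidate)
        if f < best_fit:                    # keep the mutant only if better
            best, best_fit = candidate, f
    return best, best_fit
```

For example, tuning a single hypothetical membership-function width to minimize a quadratic error settles close to the optimum within a few hundred mutations.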
Biomedical Image Processing / Medical Image Processing
Nader Riahi Alam; Reza Aghaeizade Zoroofi; Masoume Giti; Arian Deldari; Alireza Ahmadian
Volume 1, Issue 3, June 2007, Pages 157-165
Abstract
In this study, the need for a CAD system and its capabilities were investigated, and a sample program for a mammographic CAD system suited to typical Iranian patients was designed. First, the analog mammographic images were digitized at 56 and 112 µm spatial resolution and then processed by the designed program. The design and implementation comprised the following components: an image displayer capable of viewing four mammographic images from four breast views (RCC, RMLO, LCC, LMLO) in one window and determining the breast region by background removal and other conventional preprocessing tools; software processing tools including thresholding, histogram analysis, and ROI determination; patient-information fields such as clinical information and a conventional reporting section as used in radiology departments in Iran; and a computer-aided diagnostic section with appropriate processing algorithms for automatic detection of breast abnormalities, for instance the application of wavelets and fuzzy logic for detecting malignant clusters of microcalcifications. The introduced mammographic CAD system supports the collection, organization, and availability of local patient information. Using the resulting database, it becomes possible to evaluate the sensitivity and specificity of the detection algorithm and to compare different research methods.
Biomedical Image Processing / Medical Image Processing
Jamal Esmaeilpour; Sattar Mirzakouchaki; Jalil Seyfali Harsini; Abdorrahim Kadkhoda Mohammadi
Volume 1, Issue 3, June 2007, Pages 167-176
Abstract
In this paper, the role of a vector-quantizer neural network in classifying six types of ECG signals is investigated, using features extracted with the Daubechies-6 wavelet transform. The six signal types are: normal beat, left bundle branch block beat, right bundle branch block beat, premature ventricular contraction, paced beat, and fusion of paced and normal beats. The required data were obtained from the MIT/BIH arrhythmia database, and its annotation files were used to separate the patterns of the six signal types. For better feature extraction, the patterns were filtered and scaled. The energies of the last five detail signals from a six-level wavelet decomposition were used as pattern features for training and testing the vector-quantizer network. From each class, five hundred patterns were used for network training and one hundred for testing. The results showed 93.1% accuracy for six classes and above 94.3% for fewer than six classes. The similarity and dissimilarity of the classes were then examined, and the accuracy of the method was compared with that of other methods.
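A sketch of the feature extraction follows, using the Haar filter pair in place of Daubechies-6 to keep the code dependency-free; the choice of which five detail energies to keep is an assumption, since the abstract only says "the last five."

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar DWT (a stand-in for the paper's Daubechies-6)."""
    s = signal[: len(signal) // 2 * 2].reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return approx, detail

def detail_energies(signal, levels=6):
    """Energy of the detail signal at each of six decomposition levels,
    keeping the five coarsest energies as pattern features."""
    energies = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(float(np.sum(d ** 2)))
    return energies[1:]  # drop the finest level, keep the last five
```

Applied to a beat-length window (e.g., 256 samples), this yields a compact five-number feature vector per pattern.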
Biomedical Image Processing / Medical Image Processing
Seyed Mohammad Shams; Gholam Ali Hossein-Zadeh; Mohammad Mehdi Karimi
Volume 1, Issue 1, June 2007, Pages 29-37
Abstract
To analyze functional magnetic resonance imaging (fMRI) data, the parameters of a nonlinear model of the hemodynamic system, the so-called Balloon model, were characterized and estimated. Two different approaches were applied. In the first step of both, the voxels showing neural activity were identified; the Balloon-model parameters of these active voxels were then estimated both by the steepest-descent algorithm and by a genetic algorithm. The proposed approaches were applied to experimental fMRI data, and the parameters of the nonlinear Balloon model were estimated for different brain voxels. The accuracy of these characterizations was assessed by comparing the measured time series at each voxel with the modeled time series. The parameter estimates were also shown to be consistent with results obtained from system characterization via Volterra kernels, as reported in previous studies. It was concluded that the suggested approaches can accomplish a nonlinear system characterization through numerical methods while avoiding theoretical complexities, and that they have acceptable speed (especially the steepest-descent algorithm).
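The steepest-descent step can be sketched as a generic least-squares fit with finite-difference gradients; the Balloon model itself is replaced here by an arbitrary callable, and the learning rate, step count, and perturbation size are assumptions for illustration.

```python
import numpy as np

def fit_parameters(model, params0, t, measured, lr=5e-3, steps=5000, eps=1e-6):
    """Steepest descent on the squared error between a measured time series
    and a nonlinear model's output, using forward finite differences for the
    gradient. 'model(t, params)' stands in for the Balloon model (schematic)."""
    p = np.array(params0, dtype=float)
    for _ in range(steps):
        base = np.sum((model(t, p) - measured) ** 2)
        grad = np.zeros_like(p)
        for i in range(p.size):
            q = p.copy()
            q[i] += eps                      # perturb one parameter at a time
            grad[i] = (np.sum((model(t, q) - measured) ** 2) - base) / eps
        p -= lr * grad                       # descend along the gradient
    return p
```

On a toy two-parameter exponential response, the fit recovers the generating amplitude and decay rate.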
Biomedical Image Processing / Medical Image Processing
Hadi Jafariani; Hamid Abrishami Moghaddam; Mohammad Shahram Moein
Volume 1, Issue 4, June 2007, Pages 311-318
Abstract
One of the most accurate techniques for human identification is based on the uniqueness of the retinal blood-vessel pattern. In this paper, we present a new approach to human identification using retina images that is insensitive to rotation, scaling, and translation. Fourier-Mellin transform coefficients and moments of the retinal image were used to extract suitable features. To compensate for rotational effects caused by different relative positions of the retina scanner with respect to the eye, a rotation compensator was designed. For retinal image interpretation, the optic-disc location was taken as a fixed reference point and localized using the Haar wavelet and the snakes model. The experimental results demonstrated an error rate close to zero for the proposed method.
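The translation-invariance ingredient of such Fourier-Mellin features can be checked in a few lines: the magnitude spectrum of an image is unchanged by circular shifts. The full transform additionally resamples this magnitude on a log-polar grid so that rotation and scaling also become shifts; that resampling is omitted here for brevity.

```python
import numpy as np

def fourier_magnitude(img):
    """Magnitude spectrum of an image: invariant to circular translation,
    which is the first stage of a Fourier-Mellin descriptor."""
    return np.abs(np.fft.fft2(img))
```

Shifting an image only multiplies its spectrum by a unit-modulus phase factor, so the magnitudes of an image and its shifted copy match exactly.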
Biomedical Image Processing / Medical Image Processing
Ladan Amini; Hamid Soltanian Zadeh; Caro Lucas; Masoume Giti
Volume -2, Issue 1, July 2005, Pages 17-34
Abstract
Based on a discrete dynamic contour model, a method is developed for segmenting brain structures such as the thalamus and red nucleus from magnetic resonance images (MRI). A new method is presented for solving common problems in extracting the discontinuous boundary of a structure from a low-contrast image. External and internal forces deform the dynamic contour model. Internal forces are obtained from the local geometry of the contour, which consists of vertices and edges connecting adjacent vertices; external forces are derived from the image data and desired image features such as the image energy. The problems of low-contrast image data and unclear edges in the image energy are overcome by the proposed algorithm, which uses thresholding, unsupervised clustering methods such as fuzzy C-means (FCM), edge-finding filters such as Prewitt, and morphological operations. We also present a method for automatically generating an initial contour for the model from the image data. The methods are evaluated and validated by comparing radiologist and automatic segmentation results. The average similarity between segmentation results is 0.8 for the left and right thalami, indicating excellent performance of the new method. Additional noise and intensity inhomogeneity changed the evaluation results only slightly, illustrating the robustness of the proposed method to image noise and intensity inhomogeneity.
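One iteration of the deformation described above can be sketched as follows: the internal force moves each vertex toward the midpoint of its neighbours (local geometry), while the external force comes from the image data, here abstracted as a user-supplied callable. The weights and the toy external force in the usage are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def deform_step(vertices, external_force, w_int=0.5, w_ext=0.5, step=1.0):
    """One iteration of a discrete dynamic contour on a closed polygon:
    smoothing (internal) force plus an image-derived (external) force."""
    v = np.asarray(vertices, dtype=float)
    # midpoint of each vertex's two neighbours on the closed contour
    neighbours_mid = (np.roll(v, 1, axis=0) + np.roll(v, -1, axis=0)) / 2
    f_int = neighbours_mid - v                         # curvature smoothing
    f_ext = np.array([external_force(p) for p in v])   # from image data
    return v + step * (w_int * f_int + w_ext * f_ext)
```

With a toy external force pulling toward the origin, a small square contour collapses geometrically over a handful of iterations.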
Biomedical Image Processing / Medical Image Processing
Hamid Abrishami Moghaddam; Alireza Sheikh Hasani; Abbas Mostafa; Masoume Giti; Parviz Abdolmaleki
Volume -1, Issue 2, June 2005, Pages 117-128
Abstract
This paper presents a CAD system for the detection and diagnosis of microcalcification clusters in mammograms. The proposed algorithm consists of three main stages. In the first stage, the image pixels are examined to determine whether they correspond to individual microcalcification objects: the wavelet transform of the image is computed, and two wavelet coefficients together with two statistical features are fed to a neural network for a primary classification of the image pixels. In the second stage, noisy pixels extracted in the first stage are eliminated, and 18 features defined for each microcalcification are used with a nonlinear classifier for accurate detection of microcalcifications; this classifier was trained on 16 regions from a database containing 379 microcalcifications. Finally, in the third stage, five features defined for each microcalcification cluster are used with a neural network to recognize malignant microcalcification clusters; this network was trained on 22 clusters comprising 8 malignant and 14 benign cases. The performance of the algorithm was evaluated on a separate image set of 22 clusters comprising 10 malignant and 12 benign cases. Using these test images and a threshold value of 0.45, the sensitivity of the algorithm was 100% and its specificity was 91.6%.