Table 1. Distribution of mammographic density (BIRADS ratings) for the two groups of cases in the dataset.

Characteristic    Malignant cases    Benign cases
Figure 1. Examples of malignant and benign cases in a four-view display. (a) Malignant case. (b) Benign case.
Our proposed CAD scheme was developed in three steps: feature computation, feature selection, and case classification. We first built an initial feature pool containing four different groups of features. Next, a particle swarm optimization (PSO) algorithm was applied to select optimal features and remove redundant ones from the feature pool. Finally, a support vector machine (SVM), a popular machine learning classifier, was used to predict the risk or likelihood of a case being malignant. The CAD scheme was implemented in the MATLAB software environment.
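The feature selection step can be illustrated with a minimal binary PSO sketch in numpy (the original scheme was implemented in MATLAB). Note the fitness function here is an assumption: a per-feature Fisher discriminant score with a small penalty on the number of kept features stands in for the classifier-driven fitness the authors would have used; the synthetic 59-feature data mimics the size of the paper's initial feature pool.

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_score(X, y):
    """Per-feature Fisher discriminant ratio (stand-in fitness;
    the paper's PSO would score feature subsets via the classifier)."""
    m1, m0 = X[y == 1].mean(0), X[y == 0].mean(0)
    v1, v0 = X[y == 1].var(0), X[y == 0].var(0)
    return (m1 - m0) ** 2 / (v1 + v0 + 1e-12)

def binary_pso(X, y, n_particles=20, n_iter=50, penalty=0.01):
    """Minimal binary PSO: each particle is a 0/1 mask over features;
    fitness = summed Fisher score of kept features - penalty * count."""
    n_feat = X.shape[1]
    score = fisher_score(X, y)
    fit = lambda m: score[m.astype(bool)].sum() - penalty * m.sum()
    pos = rng.integers(0, 2, (n_particles, n_feat)).astype(float)
    vel = rng.normal(0, 1, (n_particles, n_feat))
    pbest, pbest_f = pos.copy(), np.array([fit(p) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    gbest_f = pbest_f.max()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # Sigmoid of velocity gives the probability of each bit being 1
        pos = (rng.random((n_particles, n_feat)) < 1 / (1 + np.exp(-vel))).astype(float)
        f = np.array([fit(p) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        if pbest_f.max() > gbest_f:
            gbest_f = pbest_f.max()
            gbest = pbest[pbest_f.argmax()].copy()
    return gbest.astype(bool)

# Toy data: 3 informative features hidden among 59 (the pool size above)
X = rng.normal(0, 1, (100, 59))
y = (rng.random(100) < 0.5).astype(int)
X[y == 1, :3] += 3.0
mask = binary_pso(X, y)
```

The selected mask is then used to subset each case's feature vector before classifier training.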
As shown in figure 2, we used four images (RCC, LCC, RMLO, and LMLO), covering the CC and MLO views of the left (L) and right (R) breasts. Before building feature pools, our CAD scheme first segmented the whole breast area in each image by removing all possible artifacts or markers outside the breast area (assigning all background pixels a value of 0, shown in black in figure 2). The image features computed from the segmented breast area fall into four groups. The first group includes 17 statistical image features describing breast area shape and density distribution (Table 2). These statistical features are widely used to quantify the pixel value distribution and its heterogeneity in a 2D image [18, 19]. The other three feature groups are block-based fast Fourier transform (FFT), discrete cosine transform (DCT), and wavelet transform (WT) features, which aim to detect and analyze local breast tissue distributions in an image. For this purpose, each target image was first divided into blocks with a scanning window of size 8×8 or 9×9, determined based on our previous investigation. Then, FFT, DCT, and WT were applied to these blocks to construct three feature matrices, from each of which 14 features were computed (Table 2). More details about feature extraction can be found in the supplemental materials. Finally, the three groups of FFT, DCT, and WT based features (3×14 = 42 features) were combined with the Shape&Density group (17 features), so that the CAD scheme computed 59 initial features from each image.
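The block-based transform features can be sketched as follows: tile the image into 8×8 blocks, apply the FFT to each block, and summarize the resulting magnitude matrix with statistics from Table 2. This is a minimal numpy illustration, not the authors' MATLAB implementation; the histogram-based entropy estimate and the subset of statistics shown are assumptions, and a random array stands in for a segmented breast image.

```python
import numpy as np

def block_fft_magnitude(image, block=8):
    """Tile the image into non-overlapping block x block windows and
    stack the FFT magnitude of every window into one feature matrix."""
    h, w = image.shape
    mags = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            win = image[r:r + block, c:c + block]
            mags.append(np.abs(np.fft.fft2(win)).ravel())
    return np.array(mags)

def summary_stats(mat):
    """A subset of the Table 2 statistics, computed over the matrix."""
    v = mat.ravel().astype(float)
    m, s = v.mean(), v.std()
    hist, _ = np.histogram(v, bins=32)           # entropy via a 32-bin
    p = hist[hist > 0] / hist.sum()              # histogram (assumption)
    return {
        "Mean": m, "Std": s, "Max": v.max(), "Min": v.min(),
        "Median": np.median(v), "Range": v.max() - v.min(),
        "RMS": np.sqrt((v ** 2).mean()), "Energy": (v ** 2).sum(),
        "Entropy": float(-(p * np.log2(p)).sum()),
        "Skewness": ((v - m) ** 3).mean() / s ** 3,
        "Kurtosis": ((v - m) ** 4).mean() / s ** 4,
    }

rng = np.random.default_rng(0)
breast = rng.random((64, 64))   # stand-in for a segmented breast image
feats = summary_stats(block_fft_magnitude(breast))
```

The same pattern applies to the DCT and WT groups, with the corresponding transform replacing `np.fft.fft2`.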
Next, two feature pools were built from different sets of view images. The first used all four images (RCC, LCC, RMLO, and LMLO) of the CC and MLO views of the left (L) and right (R) breasts (figure 2). In each case, the 59 image features described above were computed separately from each of the four view images, yielding four 59-dimensional feature vectors per case. A final vector (the feature pool) was generated by adding the corresponding (matched) features computed from the four view images together. This feature generation process is illustrated in figure 2. The second pool was built analogously, but its feature vectors were computed using only the two positive view images (e.g., the LCC and LMLO view images of one breast); the two images of the negative breast were ignored. Finally, these two feature pools and their vectors were applied separately as input to train two machine learning classifiers, each embedded with the feature selection algorithm.
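The vector-combination step described above reduces, in numpy terms, to an element-wise sum of matched features across views. A small sketch (the per-view vectors here are random placeholders for the 59 computed features, and the left breast is assumed to be the positive side):

```python
import numpy as np

# Hypothetical per-view feature vectors (59 features each), standing in
# for the output of the feature computation step on RCC/LCC/RMLO/LMLO.
rng = np.random.default_rng(1)
views = {v: rng.random(59) for v in ("RCC", "LCC", "RMLO", "LMLO")}

# Pool 1: element-wise sum of matched features across all four views.
pool_four_view = views["RCC"] + views["LCC"] + views["RMLO"] + views["LMLO"]

# Pool 2: only the two images of the positive breast (left, by assumption);
# the negative breast's images are ignored.
pool_two_view = views["LCC"] + views["LMLO"]
```

Each pool keeps the 59-feature dimensionality, so the same feature selection and classifier code can be applied to either.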
Table 2. List of four feature groups

Feature Group                     Description
Shape&Density                     Mean, Std, Convexity, MeanGradient, StdGradient, Skewness, Kurtosis, Energy, Entropy, Max, Min, Median, Range, RMS, MeanDeviation, Uniformity, Correlation
Block-based (FFT, DCT and WT)     Mean, Std, Max, Min, Median, Range, RMS, Energy, Entropy, Skewness, Kurtosis, MeanDeviation, Uniformity,
Figure 2. Feature extraction using the four-view images of the breast area.
2.2.2. Machine learning classifier and performance assessment
Using the created feature vector, a machine learning classifier is applied to generate the optimal feature cluster and predict the likelihood of a case being malignant. Although many different machine learning classifiers could be used for this purpose, the SVM classifier employs a constructive machine learning process based on statistical learning theory to minimize the generalization error. It is considered quite robust when applied to relatively small training datasets and has been used in many biomedical engineering applications [23, 24]. We therefore adopted an SVM classifier in our application, built on the SVM toolbox in the MATLAB environment.
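The classification step can be illustrated with a minimal linear SVM trained by full-batch subgradient descent on the regularized hinge loss. This is a sketch only: the authors used the SVM toolbox in MATLAB (typically with a kernel and proper hyperparameter tuning), whereas here a linear decision function on a synthetic two-class dataset stands in, with the signed margin playing the role of the malignancy likelihood score.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Full-batch subgradient descent on the regularized hinge loss.
    y must be in {-1, +1}; a constant column is appended for the bias."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        margins = y * (Xb @ w)
        viol = margins < 1                      # margin-violating samples
        grad = lam * w - (y[viol, None] * Xb[viol]).sum(0) / len(X)
        w -= lr * grad
    return w

def decision(w, X):
    """Signed score; larger values indicate the 'malignant' class."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w

# Two well-separated synthetic classes as a stand-in for real cases
rng = np.random.default_rng(0)
Xp = rng.normal([2.0, 2.0], 0.5, (20, 2))       # "malignant" cluster
Xn = rng.normal([-2.0, -2.0], 0.5, (20, 2))     # "benign" cluster
X = np.vstack([Xp, Xn])
y = np.array([1] * 20 + [-1] * 20)
w = train_linear_svm(X, y)
acc = ((decision(w, X) > 0) == (y > 0)).mean()
```

In practice the score would be thresholded (or calibrated) to report a likelihood of malignancy, and performance would be assessed on held-out cases rather than the training set.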