Hello everyone,

I am currently implementing the EM (Expectation-Maximization) algorithm in order to segment a grayscale image into two classes.

When I test my algorithm on an image such as the coins image,
http://up.vbiran.ir/uploads/136213924137813_coins.png

it converges nicely to the two Gaussians.

My problem is that when I add noise to the image, for example with im=imnoise(im,'gaussian',0.001,0.005), the Gaussians become unstable and diverge over the iterations, as you can see by running the code below.
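For anyone reproducing the setup outside MATLAB, the noise step can be sketched in NumPy. This is a rough sketch under the assumption that imnoise(im,'gaussian',m,v) works on the image rescaled to [0,1] and adds Gaussian noise of mean m and variance v; the gradient test image is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(im, mean=0.001, var=0.005):
    """Rough NumPy equivalent of MATLAB's imnoise(im, 'gaussian', mean, var):
    scale the image to [0, 1], add Gaussian noise with the given mean and
    variance, and clip the result back to [0, 1]."""
    im01 = im.astype(float) / 255.0
    noisy = im01 + rng.normal(mean, np.sqrt(var), im01.shape)
    return np.clip(noisy, 0.0, 1.0)

# Hypothetical test image: a horizontal gray-level gradient
im = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
noisy = add_gaussian_noise(im)
```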

I based my implementation on this article: https://chrisjmccormick.wordpress.co...d-matlab-code/

My code is below:

Code:
function [P_u, P_c, mu_u, mu_c, sigma2_u, sigma2_c] = ExpectationMaximizationAlgorithm(imDiff, alpha, NbIter, epsilon, flagDisp)
%
% [P_u, P_c, mu_u, mu_c, sigma2_u, sigma2_c] = ExpectationMaximizationAlgorithm(imDiff, alpha, NbIter, epsilon, flagDisp)
% performs the Expectation-Maximization (EM) algorithm on the input image
% imDiff to retrieve the Gaussian parameters
%
% IN:
%       - imDiff: grayscale difference image
%       - alpha: coefficient to set the initial Gaussian mixture model
%       - NbIter: maximum number of iterations
%       - epsilon: difference (between two consecutive log-likelihood
%         computations) below which the EM loop exits
%       - flagDisp: 0: no plot / 1: plot
%
% OUT:
%       - P_u/P_c: a priori probabilities
%       - mu_u (resp. mu_c): Gaussian means
%       - sigma2_u (resp. sigma2_c): Gaussian variances
%
% REFERENCE(S):
%       - "Automatic Analysis of the Difference Image for Unsupervised
%       Change Detection", Lorenzo Bruzzone, Diego Fernandez Prieto,
%       IEEE Transactions on Geoscience and Remote Sensing, VOL.38, NO.3,
%       May 2000
%       - "Mixture Densities, Maximum Likelihood and the EM Algorithm",
%       R.A. Redner & H.F. Walker, SIAM Review, VOL.26, NO.2, pp. 195-239,
%       1984
%       - "Blobworld: Image Segmentation using Expectation-Maximization
%       and its Application to Image Querying", Chad Carson & Hayit
%       Greenspan, IEEE Transactions on Pattern Analysis and Machine
%       Intelligence, VOL.24, NO.8, August 2002
%
% AUTHOR: XXXXXXXXX

imDiff=double(imDiff);
[M,N]=size(imDiff);

if flagDisp
    figure
    imagesc(imDiff)
    title('imDiff')
    colormap gray
end

% Compute the normalized image histogram
histo=imhist(imDiff/255,256);
distrib_mat=[[0:255].' histo/(M*N)];
distrib_u=distrib_mat;
distrib_c=distrib_mat;

% Initialize P(t=0,wn), mu(t=0,wn), sigma^2(t=0,wn) by thresholding the
% previously computed histogram
Md=(max(max(imDiff))-min(min(imDiff)))/2;
Tu=floor(Md*(1-alpha));
Tc=floor(Md*(1+alpha));

% Means of the 'changed' and 'unchanged' Gaussian distributions
mu_c=sum(distrib_c(Tc:end,1).*distrib_c(Tc:end,2))/sum(distrib_c(Tc:end,2));
mu_u=sum(distrib_u(1:Tu,1).*distrib_u(1:Tu,2))/sum(distrib_u(1:Tu,2));

% Variances of the 'changed' and 'unchanged' Gaussian distributions
sigma2_c=sum(distrib_c(Tc:end,2).*((distrib_c(Tc:end,1)-mu_c).^2))/sum(distrib_c(Tc:end,2));
sigma2_u=sum(distrib_u(1:Tu,2).*((distrib_u(1:Tu,1)-mu_u).^2))/sum(distrib_u(1:Tu,2));

% Compute P(t+1,wn), mu(t+1,wn), sigma^2(t+1,wn) with the
% Expectation-Maximization algorithm.
% Reshape the imDiff matrix into a 1-by-M*N vector to avoid an extra loop
% in the EM iterations
imDiffLin=reshape(imDiff,1,M*N);

% Equal prior probabilities
P_u=1/2;
P_c=1/2;

fprintf(' Initial values: mu_c=%.4f, mu_u=%.4f, sigma2_c=%.4f, sigma2_u=%.4f, P_c=%.4f, P_u=%.4f \n',mu_c, mu_u, sigma2_c, sigma2_u, P_c, P_u);
loglike_previous=0;

for k=1:NbIter
    % Compute p(X|w_u) and p(X|w_c)
    pX_u=(sqrt(2*pi*sigma2_u))^(-1)*exp(-(imDiffLin-mu_u).^2/(2*sigma2_u));
    pX_c=(sqrt(2*pi*sigma2_c))^(-1)*exp(-(imDiffLin-mu_c).^2/(2*sigma2_c));

    % p(X) is a Gaussian Mixture (GM) model
    pX=P_u*pX_u+P_c*pX_c;

    % Compute p(w_u|X) and p(w_c|X) using Bayes' rule
    pu_X=P_u*(pX_u./pX);
    pc_X=P_c*(pX_c./pX);

    % Compute the new prior probabilities for all clusters
    P_u=(1/(M*N))*sum(pu_X);
    P_c=(1/(M*N))*sum(pc_X);

    % Compute the new means
    mu_u=sum(pu_X.*imDiffLin)/sum(pu_X);
    mu_c=sum(pc_X.*imDiffLin)/sum(pc_X);

    % Compute the new variances; each class must be normalized by the sum
    % of its own posteriors
    sigma2_u=sum(pu_X.*(imDiffLin-mu_u).^2)/sum(pu_X);
    sigma2_c=sum(pc_X.*(imDiffLin-mu_c).^2)/sum(pc_X);

    fprintf(' Iteration n°%.0f: mu_c=%.4f, mu_u=%.4f, sigma2_c=%.4f, sigma2_u=%.4f, P_c=%.4f, P_u=%.4f \n',k,mu_c, mu_u, sigma2_c, sigma2_u, P_c, P_u);

    % Just for plotting
    pX_u_plot=(sqrt(2*pi*sigma2_u))^(-1)*exp(-(([0:255])-mu_u).^2/(2*sigma2_u));
    pX_c_plot=(sqrt(2*pi*sigma2_c))^(-1)*exp(-(([0:255])-mu_c).^2/(2*sigma2_c));

    % Check the exit condition by comparing the log-likelihoods of the
    % previous and current iterations
    loglike_current=sum(log(pX));
    if abs(loglike_previous-loglike_current)<epsilon
        break;
    end
    loglike_previous=loglike_current;

    if flagDisp
        figure(29)
        plot(P_u*pX_u_plot,'b')
        hold on
        plot(P_c*pX_c_plot,'r')
        bar(distrib_mat(:,2),'g')
        hold off
        axis([0 255 0 0.06])
        pause(0.3)
    end
end
end
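For comparison, the same two-class EM updates can be sketched in NumPy. This is a minimal 1-D sketch, not the MATLAB function above: the synthetic data and the crude data-driven initialization replace the histogram thresholding. The point to watch is that each class's mean and variance are normalized by the sum of that same class's responsibilities:

```python
import numpy as np

def em_two_gaussians(x, n_iter=100, eps=1e-6):
    """Minimal EM for a two-component 1-D Gaussian mixture on a data vector x."""
    x = np.asarray(x, dtype=float).ravel()
    # Crude initialization from the data spread (hypothetical, for the sketch)
    mu = np.array([x.mean() - x.std(), x.mean() + x.std()])
    var = np.array([x.var(), x.var()])
    p = np.array([0.5, 0.5])
    loglike_prev = -np.inf
    for _ in range(n_iter):
        # E-step: responsibilities p(w_k | x) via Bayes' rule
        px_k = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        px = px_k @ p                      # mixture density p(x)
        resp = p * px_k / px[:, None]      # shape (n_samples, 2)
        # M-step: each class's statistics use that SAME class's responsibilities
        nk = resp.sum(axis=0)
        p = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        # Exit on a small change in log-likelihood
        loglike = np.log(px).sum()
        if abs(loglike - loglike_prev) < eps:
            break
        loglike_prev = loglike
    return p, mu, var

# Synthetic bimodal data standing in for the difference-image gray levels
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(50, 5, 5000), rng.normal(180, 10, 5000)])
p, mu, var = em_two_gaussians(x)
```

With these updates the iterates stay stable even on noisy data, since each variance is a weighted average over its own cluster's posteriors.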
If anyone can see what is going wrong, I would welcome any feedback.

Have a good day