The empirical Bayes version involves "independent" repetitions (a sequence) of the component decision problem. With varying sample sizes possible, these components are not identical. However, we impose the usual assumption that the parameter sequence θ = (θ_1, θ_2, …) consists of independent G-distributed parameters, where G is unknown. We assume that G ∈ 𝒢, a known family of distributions. The sample size N_i and the decision rule d_i for component i of the sequence are determined in an evolutionary way. The sample size N_1 and the decision rule d_1 ∈ D_{N_1} used in the first component are fixed and chosen in advance. The sample size N_2 and the decision rule d_2 are functions of X^1 = (X_11, …, X_{1N_1}), the observations in the first component. In general, N_i is an integer-valued function of (X^1, X^2, …, X^{i-1}) and, given N_i, d_i is a D_{N_i}-valued function of (X^1, …, X^{i-1}). The action chosen in the i-th component is d_i(X^i), which hides the display of dependence on (X^1, …, X^{i-1}). We construct an empirical Bayes decision rule for estimating a normal mean and show that it is asymptotically optimal.
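The evolutionary scheme above can be illustrated with a minimal numerical sketch. The setup below is an assumption chosen for concreteness, not the paper's construction: G is taken to be N(μ_G, τ²), the within-component variance σ² is known, and a common sample size n replaces the varying N_i. At component i, the rule estimates the prior moments of G from the past sample means X̄_1, …, X̄_{i-1} (using Var(X̄) = τ² + σ²/n) and shrinks X̄_i accordingly, mimicking the Bayes rule under squared-error loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: theta_i ~ G = N(mu_G, tau2), X_ij ~ N(theta_i, sigma2),
# sigma2 known; a fixed common sample size n stands in for the varying N_i.
mu_G, tau2, sigma2, n = 1.0, 2.0, 1.0, 5
num_components = 2000

theta = rng.normal(mu_G, np.sqrt(tau2), num_components)
X = rng.normal(theta[:, None], np.sqrt(sigma2), (num_components, n))
xbar = X.mean(axis=1)  # sufficient statistic for each component mean

est = np.empty(num_components)
for i in range(num_components):
    if i < 2:
        est[i] = xbar[i]  # not enough history yet; fall back to the MLE
        continue
    # Empirical Bayes step: d_i depends on the past data (X^1, ..., X^{i-1})
    # only through the past sample means.
    past = xbar[:i]
    mu_hat = past.mean()
    tau2_hat = max(past.var(ddof=1) - sigma2 / n, 0.0)  # Var(xbar) = tau2 + sigma2/n
    shrink = tau2_hat / (tau2_hat + sigma2 / n)
    est[i] = shrink * xbar[i] + (1 - shrink) * mu_hat

# As the number of components grows, the empirical Bayes rule's average
# squared error approaches the Bayes risk and beats the componentwise MLE.
mse_eb = np.mean((est - theta) ** 2)
mse_mle = np.mean((xbar - theta) ** 2)
```

Asymptotic optimality in this toy setting shows up as `mse_eb` falling below `mse_mle` and approaching the Bayes risk τ²(σ²/n)/(τ² + σ²/n) as the number of components increases.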