Hi Burhan,

Thanks for your reply.

Yes, it will take a long time if you want to obtain codebooks of different sizes; I am sorry for the inconvenience. To shorten the execution time (which of course also depends heavily on the computation platform you have), one option is what I did at the time: open multiple sessions (i.e., whatever program you use to run the Python scripts) so that different processes learn different codebook sizes in parallel. Another way is to change `options` and `train_opt`, which control the number of iterations of the simulation. Of course, if you make these numbers very small, you should expect some degradation in the final performance; that is the usual tradeoff in these stochastic algorithms. Similarly, if you observe that the achieved gain is already good enough, you can simply stop the program without waiting until the end. Regarding the calculation of the performance, kindly find the following simple example code:

M = 64;  % M: number of antennas
N = 16;  % N: codebook size (number of beams)
U = 100; % U: number of users
Theta = randn(M, N); % The learned phases (random placeholder here)
W = (1/sqrt(M)) * exp(1j*Theta); % Beam codebook (output of the simulation)
H = randn(M, U) + 1j * randn(M, U); % User channel dataset (random placeholder here)
average_gain = mean(max(abs(W'*H).^2)); % Eq. (8) in the paper

You simply need to replace `W` with your learned codebook, a DFT codebook, or an EGC codebook (just make sure it has a shape of the number of antennas by the number of beams), and replace `H` with your user channel dataset (similarly, make sure it has a shape of the number of antennas by the number of users). As I mentioned last time, Eq. (8) in the paper is exactly what I used to calculate this average beamforming gain, if you want to know more details.
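In case a second reference helps, here is a small Python/NumPy sketch of the same calculation (my own rough equivalent, not the original script), using a standard DFT codebook in place of `W` and a random placeholder for `H`:

```python
import numpy as np

M = 64   # number of antennas
N = 16   # codebook size (number of beams)
U = 100  # number of users

# Standard DFT codebook: column n steers to spatial frequency n/N.
# Every entry has magnitude 1/sqrt(M), so each beam has unit norm,
# matching the constant-modulus codebook in the MATLAB snippet.
m = np.arange(M)
n = np.arange(N)
W = (1 / np.sqrt(M)) * np.exp(1j * 2 * np.pi * np.outer(m, n) / N)

# Random placeholder for the user channel dataset (M x U);
# replace with your own channels.
rng = np.random.default_rng(0)
H = rng.standard_normal((M, U)) + 1j * rng.standard_normal((M, U))

# Average beamforming gain in the style of Eq. (8): for each user,
# pick the best beam in the codebook, then average over users.
average_gain = np.mean(np.max(np.abs(W.conj().T @ H) ** 2, axis=0))
print(average_gain)
```

Note that `W.conj().T` is the NumPy counterpart of MATLAB's `W'` (conjugate transpose), which matters since the codebook and channels are complex.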

Good luck,

Yu