A couple of very attractive/interesting papers about equation solving under noisy conditions (Gaussian channels).
(A + n1) x = b + n2,
where n1 is the parameter-estimation error (a noise matrix) and n2 is the observation error (a noise vector), both Gaussian distributed. But whatever we do with INFORMATION THEORY (Shannon) and MACHINE LEARNING, in the end we arrive back at LINEAR ALGEBRA and the THEORY OF MEASUREMENT!
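A minimal numerical sketch of this setup (my own illustration, not the algorithm from either paper; the noise levels and the ridge parameter below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n)) + 3 * np.eye(n)  # well-conditioned system matrix
x_true = rng.standard_normal(n)
b = A @ x_true

sigma1, sigma2 = 0.05, 0.05                   # noise std devs (assumed for illustration)
N1 = sigma1 * rng.standard_normal((n, n))     # parameter-estimation error (noise matrix)
n2 = sigma2 * rng.standard_normal(n)          # observation error (noise vector)

# Naive solve ignores the noise entirely:
x_naive = np.linalg.solve(A + N1, b + n2)

# A Tikhonov/ridge-regularized solve is one standard way to hedge against
# the perturbation; in practice lambda would be tied to the noise variances:
lam = sigma1**2 * n
Ap = A + N1
x_ridge = np.linalg.solve(Ap.T @ Ap + lam * np.eye(n), Ap.T @ (b + n2))

print(np.linalg.norm(x_naive - x_true))
print(np.linalg.norm(x_ridge - x_true))
```

Both estimates stay close to x_true here only because the noise is small and A is well conditioned; the interesting regime in the papers is when it is not.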
The second surprise came from this paper: [ref. 2].
It seems one of the most important problems in the mathematical community is GIBBS SAMPLING, which is a special case of the Metropolis–Hastings algorithm that resamples one coordinate (dimension) at a time.
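To make the "one dimension at a time" point concrete, here is a minimal Gibbs sampler for a bivariate Gaussian (a textbook example of my own, not from the paper): each step draws one coordinate from its exact conditional, which is precisely the Metropolis–Hastings move whose proposal is always accepted.

```python
import numpy as np

# Gibbs sampling from a bivariate standard Gaussian with correlation rho.
rng = np.random.default_rng(1)
rho = 0.8
n_samples = 20000
x = y = 0.0
samples = np.empty((n_samples, 2))
for i in range(n_samples):
    # Conditional of x given y is N(rho*y, 1 - rho^2), and symmetrically for y.
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    samples[i] = (x, y)

print(np.corrcoef(samples.T)[0, 1])  # empirical correlation, should approach rho
```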
I would be interested in your opinion, if you have time to type a few sentences about it.
- D. Guo, S. Shamai and S. Verdú, "Mutual information and minimum mean-square error in Gaussian channels," in IEEE Transactions on Information Theory, vol. 51, no. 4, pp. 1261-1282, April 2005. doi: 10.1109/TIT.2005.844072
- J. Choi, "An MCMC–MIMO Detector as a Stochastic Linear System Solver Using Successive Overrelaxation," in IEEE Transactions on Wireless Communications, vol. 15, no. 2, pp. 1445-1455, Feb. 2016. doi: 10.1109/TWC.2015.2490071