This informal talk will present ongoing research on improved inference for MLP systems, based on the statistical interpretation of deep neural networks (DNNs). Under this interpretation, the conventional MLP forward pass can be seen as a coarse approximation of inference on a statistical model, obtained by marginalizing over the values of the hidden nodes. This opens the door to improved inference with neural networks through better approximations of this marginalization. In particular, the same approximations used for uncertainty propagation can be applied here, but without the need for uncertainty estimation. These algorithms therefore work with conventional features and require only minimal modifications of the conventional MLP computation. Current experiments show noticeable performance improvements in both clean and distorted environments, but only for some feature types. The talk therefore also aims to gather feedback from other researchers working with AUDIMUS on possible causes of the current algorithm behavior. Since this is ongoing research, attendance will be limited to L2F and VI members.
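As a rough illustration of the idea (not the specific algorithm of this talk), the sketch below contrasts a conventional sigmoid layer with a Gaussian uncertainty-propagation pass through the same layer, using the standard probit-based approximation of the expected sigmoid. All function names and parameters here are illustrative assumptions; the point is that the marginalized pass needs only a minor change to the conventional computation.

```python
import numpy as np

def forward_conventional(x, W, b):
    # Conventional MLP layer: sigmoid applied to the pre-activation.
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

def forward_marginalized(mu_x, var_x, W, b):
    # Approximate E[sigmoid(a)] for a = W x + b with x ~ N(mu_x, diag(var_x)),
    # via the common probit-based approximation:
    #   E[sigmoid(a)] ~ sigmoid(mu_a / sqrt(1 + pi * var_a / 8))
    mu_a = W @ mu_x + b            # mean of the pre-activation
    var_a = (W ** 2) @ var_x       # variance of the pre-activation (independent inputs)
    return 1.0 / (1.0 + np.exp(-mu_a / np.sqrt(1.0 + np.pi * var_a / 8.0)))
```

With zero input variance the two passes coincide, so the conventional forward pass is the zero-variance special case of the marginalized one; nonzero variance pulls the outputs toward 0.5.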