Advances in Error Correction and Distributionally Robust Optimization

Recent work shows notable progress in error correction and distributionally robust optimization (DRO). On the coding side, researchers are improving the soft output of decoders such as GRAND, GCD, OSD, and SCL by leveraging code structure and linear codebook constraints. On the optimization side, new DRO algorithms and frameworks handle uncertainty and adversarial perturbations in settings that include control systems and online decision-making. Noteworthy papers include:

  • NeuroSep-CP-LCB, which integrates neural networks with contextual bandits and conformal prediction for early sepsis detection, and
  • DR-PETS, which extends the PETS algorithm to certify robustness against adversarial perturbations in control systems.

These advances have the potential to improve patient outcomes, control-system reliability, and decision-making under uncertainty; illustrative sketches of the two threads' core techniques follow.
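
On the decoding side, the common primitive is GRAND-style guessing against a linear codebook. The sketch below is a minimal hard-decision illustration, not the soft-output method of any listed paper: it tests noise patterns in order of increasing Hamming weight and accepts the first candidate that satisfies the parity-check (linear codebook) constraint. The toy Hamming(7,4) code and the helper name grand_decode are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def grand_decode(y, H, max_weight=3):
    """Hard-decision GRAND sketch: guess noise patterns in order of
    increasing Hamming weight; a word c is in the codebook iff
    H @ c == 0 (mod 2), i.e., the linear codebook constraint holds."""
    n = len(y)
    queries = 0
    for w in range(max_weight + 1):            # weight-ordered guessing
        for flips in combinations(range(n), w):
            queries += 1
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1                 # candidate noise pattern
            c = (y + e) % 2                    # candidate codeword
            if not (H @ c % 2).any():          # syndrome zero: codeword found
                return c, e, queries
    return None, None, queries                 # abandoned: no codeword found

# Parity-check matrix of the Hamming(7,4) code (column j is j+1 in binary).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

received = np.array([1, 1, 0, 1, 0, 0, 0])  # codeword 1101001, last bit flipped
c_hat, e_hat, n_queries = grand_decode(received, H)
print(c_hat, e_hat, n_queries)              # recovers 1101001 after 8 queries
```

Soft-output variants additionally estimate the probability that the returned codeword is correct, and the balanced-tree paper among the sources aims to reduce the number of queries.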

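On the optimization side, most of the listed DRO works instantiate the same min-max template; the generic Wasserstein form below uses notation of my own rather than any single paper's:

$$\min_{\theta}\ \sup_{Q:\, W(Q,\hat{P}) \le \varepsilon}\ \mathbb{E}_{\xi \sim Q}\big[\ell(\theta,\xi)\big]$$

The decision $\theta$ is optimized against the worst-case distribution $Q$ within Wasserstein radius $\varepsilon$ of the empirical distribution $\hat{P}$. The Sinkhorn-ambiguity-set paper swaps the metric $W$ for an entropy-regularized Sinkhorn distance, and DR-PETS applies a similar worst-case view to planning in control.
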
Sources

Leveraging Code Structure to Improve Soft Output for GRAND, GCD, OSD, and SCL

NeuroSep-CP-LCB: A Deep Learning-based Contextual Multi-armed Bandit Algorithm with Uncertainty Quantification for Early Sepsis Prediction

On the Minimax Regret of Sequential Probability Assignment via Square-Root Entropy

Umlaut information

A Balanced Tree Transformation to Reduce GRAND Queries

Wasserstein Distributionally Robust Bayesian Optimization with Continuous Context

DR-PETS: Learning-Based Control With Planning in Adversarial Environments

Data-driven Distributionally Robust Control Based on Sinkhorn Ambiguity Sets

Reinforcement Learning for Efficient Toxicity Detection in Competitive Online Video Games
