claim
In offline reinforcement learning from human feedback (RLHF), an ε-fraction of the trajectory pairs in a preference dataset may be corrupted, modeling either adversarial attacks or noisy human preferences.
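A minimal sketch of the ε-corruption model this claim describes: given binary preference labels over trajectory pairs, an adversary (or noise process) flips the labels of an ε-fraction of the pairs. The function name and setup are illustrative, not taken from any specific paper's implementation.

```python
import random

def corrupt_preferences(labels, epsilon, rng=None):
    """Flip the preference label of an epsilon-fraction of trajectory pairs.

    labels: list of 0/1 preferences (1 = first trajectory preferred).
    epsilon: fraction of pairs to corrupt, with 0 <= epsilon <= 1.
    Returns a new list; the input list is left unchanged.
    """
    rng = rng or random.Random()
    n = len(labels)
    k = int(epsilon * n)  # number of corrupted pairs
    corrupted_idx = set(rng.sample(range(n), k))  # adversary picks which pairs
    return [1 - y if i in corrupted_idx else y for i, y in enumerate(labels)]

# Example: with epsilon = 0.1, exactly 10% of 1000 labels are flipped.
clean = [1] * 1000
noisy = corrupt_preferences(clean, epsilon=0.1, rng=random.Random(0))
```

Note that this sketch flips labels uniformly at random; an adversarial corruption would instead choose the ε-fraction to maximally bias the learned reward model, which is what makes the robust-RLHF setting harder than plain label noise.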

Authors

Sources

Referenced by nodes (1)