OpenAI’s Alignment research focuses on training AI systems to be helpful, truthful, and safe. Our team explores and develops methods for learning from human feedback.
Our long-term goal is to develop scalable methods that can align the far more capable AI systems of the future, a critical part of our mission.