LLM alignment trains models to follow human instructions and behave ethically. We don’t want models to produce biased, toxic, or otherwise harmful outputs.