Currently, the only indication copilot-verifier gives that a program is correct is the absence of error messages. This doesn't give the user any sense of why the program is correct. We should investigate what it would take to equip copilot-verifier with the ability to produce human-readable explanations of what the verifier did in the course of proving a program correct.
In very broad strokes, I imagine this would require taking hold of the ProvedGoals tree that Crux returns after simulation and analyzing it to determine what explanations to produce (see the sketch below). There are likely some finer details to work out, however.
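To make that slightly more concrete, here is a minimal sketch of what such an analysis might look like. The `GoalTree` type below is a hypothetical, simplified stand-in for Crux's actual `ProvedGoals` type (which additionally tracks assumptions, program locations, counterexamples, and so on); the constructor names and example messages are purely illustrative, not the real crux API.

```hs
-- Hypothetical, simplified stand-in for the goal tree Crux returns
-- after simulation. The real ProvedGoals type is considerably richer.
data GoalTree
  = Branch GoalTree GoalTree   -- goals split along a control-flow branch
  | Proved String              -- an obligation the solver discharged
  | NotProved String           -- an obligation the solver could not discharge

-- Walk the tree and render one explanation line per proof obligation.
explain :: GoalTree -> [String]
explain (Branch l r)    = explain l ++ explain r
explain (Proved msg)    = ["Proved: " ++ msg]
explain (NotProved msg) = ["Could not prove: " ++ msg]

main :: IO ()
main = mapM_ putStrLn (explain example)
  where
    -- Illustrative goals only; real descriptions would be derived from
    -- the metadata attached to each goal.
    example =
      Branch
        (Proved "the generated step function matches the Copilot spec")
        (Proved "no arithmetic overflow in the generated C code")
```

The real work would be in turning the metadata attached to each goal (assumptions, source locations, the kind of safety condition) into descriptions a user can actually read, rather than in the traversal itself.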