Never. Like I said, normal human drivers don't decide "hmm, I would rather hit that pedestrian than risk injury to myself!" They try to avoid the accident. So will AIs.
Human drivers have built-in hierarchies, priorities, that they use to make split-second moral decisions - including the risks of self-sacrifice. The question asked is about the analogous factors that will be built into an AI.
At no point will the AI decide "hey, the driver is more valuable than the pedestrian, so I won't stop short and risk being rear-ended" or something similar.
Of course it will. It will have to have that capability, in order to choose between different probabilities of mishap.
For example, if the pedestrian is just stepping off the shoulder of a freeway ramp, and a semi is following a bit too closely behind the AI on the curve, the AI will have to make (or have built into its priorities) a calculation involving the probabilities of bad consequences regardless of its behavior. And that calculation will have to include the severity of the different bad consequences - if the obstacle is a deer, a dog, a human, a human who appears competent and alert, or a human about to jump back, the calculation will change.
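To make the shape of that calculation concrete, here is a minimal sketch of an expected-cost comparison between maneuvers. Every severity weight, probability, and maneuver name below is a hypothetical illustration invented for this example, not a claim about how any real driving system is built.

```python
# Minimal sketch: compare maneuvers by expected cost (probability * severity).
# All weights and probabilities are hypothetical illustrations.

# How bad a collision with each kind of obstacle is judged to be (arbitrary scale).
SEVERITY = {
    "deer": 2.0,
    "dog": 1.0,
    "inattentive_pedestrian": 10.0,
    "alert_pedestrian": 8.0,    # judged likelier to jump back, but still a person
    "rear_end_by_semi": 6.0,    # risk to the car's own occupants
    "leave_roadway": 5.0,       # swerving off the ramp entirely
}

def expected_cost(outcomes):
    """Sum of probability * severity over the possible bad outcomes of one maneuver."""
    return sum(prob * SEVERITY[outcome] for outcome, prob in outcomes)

# Candidate maneuvers, each with hypothetical probabilities of its bad outcomes.
maneuvers = {
    "brake_hard":      [("inattentive_pedestrian", 0.05), ("rear_end_by_semi", 0.40)],
    "swerve_off_ramp": [("leave_roadway", 0.90)],
    "continue":        [("inattentive_pedestrian", 0.70)],
}

for name, outcomes in maneuvers.items():
    print(f"{name}: expected cost {expected_cost(outcomes):.2f}")

best = min(maneuvers, key=lambda m: expected_cost(maneuvers[m]))
print("chosen maneuver:", best)
```

The point of the sketch is only that swapping the obstacle type (deer vs. alert pedestrian vs. inattentive pedestrian) or the trailing-semi probability changes which maneuver comes out cheapest - which is exactly the kind of severity-weighted choice being described.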
At some point, an AI will either be capable of swerving the car off the ramp entirely, with all the risk to the driver that entails, or it won't be. Either that, or freeway ramps will be redesigned to make the situation impossible.
As you pointed out, we are not dealing with certainties here. The AI will have to make choices via calculated odds, and that means calculated severity of consequences. The worry here is that at some point, amid the ongoing expense and frustration of getting an AI to handle this stuff, and the extended frustration of interests that want AI to work very badly, the focus will shift to altering the landscape instead: building a world the AI can handle. Cheap AI. Minimal AI.