
While developing many of our machine learning solutions, Enview researchers often need to understand what our models are “thinking” when they make predictions on LiDAR datasets. One way we peer into the minds of our models is by calculating SHAP values to determine what factors are influencing predictions the most.
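As a rough illustration of the idea (a sketch, not our actual pipeline), per-point SHAP values for a tree-based ground classifier can be computed with the open-source `shap` library. Everything below is assumed for the example: the classifier type, the synthetic data, and every feature name except "mad_zenith_angle", which is mentioned later in this post.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic per-point features standing in for real LiDAR-derived inputs.
# Only "mad_zenith_angle" comes from the post; the rest are placeholders.
rng = np.random.default_rng(0)
feature_names = ["mad_zenith_angle", "height_above_ground_est", "planarity", "intensity"]
X = rng.random((5000, len(feature_names)))

# Toy labels: points with low orientation variance and low height are "ground".
y = ((X[:, 0] < 0.3) & (X[:, 1] < 0.5)).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles:
# one signed value per point per feature, pushing toward or away from "ground".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which inputs drive the ground / not-ground decision the most.
shap.summary_plot(shap_values, X, feature_names=feature_names)
```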

The animations above come from a dataset of the Las Vegas Strip that has been run through a machine learning classifier tasked with identifying ground points. The first animation shows the model’s predicted probabilities for ground (pred_probs_cass2), while the second displays the SHAP values for the various model inputs. Where points are very red, the input indicates a higher likelihood that the point is ground. For example, if we look at the sphinx with the “mad_zenith_angle” input (a measure of the variance in local orientation), we can see it highlighted in blue, indicating the model has learned that curvature reduces the probability of a point being ground.
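Continuing the sketch above, the per-point SHAP values for a single input can be painted back onto the cloud to produce views in the spirit of these animations. The XYZ coordinates here are hypothetical placeholders; in practice the values would be exported as an extra point attribute and viewed in a 3D point cloud tool.

```python
import matplotlib.pyplot as plt

# Hypothetical per-point XYZ coordinates, aligned row-for-row with X above.
xyz = rng.random((X.shape[0], 3))

# Color each point by its SHAP value for one input (here mad_zenith_angle):
# red pushes the prediction toward "ground", blue pushes it away.
idx = feature_names.index("mad_zenith_angle")
plt.scatter(xyz[:, 0], xyz[:, 1], c=shap_values[:, idx], cmap="coolwarm", s=1)
plt.colorbar(label="SHAP value for mad_zenith_angle")
plt.title("Per-point SHAP contribution (top-down view)")
plt.show()
```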
When we fine-tune our models, getting a visual understanding of what the model is seeing through SHAP values can speed up development. Sometimes our human biases and expectations are challenged when we take a deeper look with tools like SHAP.