October 13, 2020

By Kevin Balkoski

[Animation: the model's predicted ground probabilities over the Las Vegas Strip (vegas_prob)]

While developing our machine learning solutions, Enview researchers often need to understand what our models are “thinking” when they make predictions on LiDAR datasets. One way we peer into the minds of our models is by calculating SHAP (SHapley Additive exPlanations) values to determine which factors influence predictions the most.
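For tree-based classifiers, the open-source shap library can compute these attributions directly. Below is a minimal sketch of the idea, assuming a scikit-learn random forest over a few per-point features; aside from mad_zenith_angle (which appears below), the feature names, the synthetic data, and the model are all placeholders, not our production pipeline.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-point LiDAR features; only mad_zenith_angle is taken
# from this post, the rest are illustrative placeholders.
feature_names = ["height_above_neighbors", "intensity", "mad_zenith_angle"]
X = pd.DataFrame(np.random.rand(1000, 3), columns=feature_names)
y = np.random.randint(0, 2, size=1000)  # 1 = ground, 0 = non-ground

model = RandomForestClassifier(n_estimators=50).fit(X, y)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Depending on the shap version, the result is a list of per-class arrays
# or a single 3-D array; either way, pull the values for the ground class.
ground_shap = (shap_values[1] if isinstance(shap_values, list)
               else shap_values[..., 1])
# ground_shap has shape (n_points, n_features): one attribution per
# input feature for every point's ground prediction.
```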

[Animation: per-point SHAP values for each model input (vegas_shap2)]


The animations above show a dataset of the Las Vegas Strip run through a machine learning classifier tasked with identifying ground points. The first animation shows the model's predicted probability of ground (pred_probs_cass2), while the second displays the SHAP values for each of the model's inputs. Where points are strongly red, that input is pushing the prediction toward ground; where they are blue, it is pushing the prediction away. For example, looking at the sphinx with the “mad_zenith_angle” input (a measure of the variance in local surface orientation), the sphinx is highlighted in blue, indicating the model has learned that curvature reduces the probability that a point is ground.
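A static view of this kind of red/blue rendering can be sketched by mapping the SHAP values for a single input onto a diverging colormap. This continues from the sketch above; the planimetric point coordinates here are made up for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical x/y coordinates for each LiDAR point.
xy = np.random.rand(len(X), 2) * 100.0

idx = feature_names.index("mad_zenith_angle")
vals = ground_shap[:, idx]

# Diverging colormap with symmetric limits, so red means the input pushes
# the prediction toward ground and blue means it pushes away.
lim = np.abs(vals).max()
plt.scatter(xy[:, 0], xy[:, 1], c=vals, cmap="bwr",
            vmin=-lim, vmax=lim, s=2)
plt.colorbar(label="SHAP value: mad_zenith_angle")
plt.title("Per-point SHAP values for the ground class")
plt.show()
```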

When fine-tuning our models, a visual understanding of what the model is seeing via SHAP values can speed up development. Sometimes our human biases and expectations are challenged when we take a deeper look with tools like SHAP.



