Notes on understanding why a Random Forest makes the decisions it does.
Understanding Random Forests
A good, visual explanation of how Random Forests work.
Model Feature Importances
Feature importances can be read from both the scikit-learn and Spark MLlib implementations after training.
However, these scores describe the features as a whole, based on the training dataset, i.e. we still lack visibility into any individual prediction.
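As a quick illustration, scikit-learn exposes these global scores on the fitted model via feature_importances_; a minimal sketch (the synthetic dataset is purely illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative synthetic data; any tabular dataset works the same way.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X, y)

# Global, impurity-based importances computed during training;
# one score per feature, normalised to sum to 1.0.
for i, score in enumerate(rf.feature_importances_):
    print(f"feature {i}: {score:.3f}")
```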

PySpark & MLLib: Random Forest Feature Importances
A Stack Overflow question on extracting the feature importances of a random forest trained with PySpark; at the time there was no example in the documentation, nor a method for it on the trained model.
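For reference, the newer DataFrame-based pyspark.ml API does expose importances directly on the fitted model. A minimal sketch, assuming a toy DataFrame with made-up columns f1, f2 and label:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.getOrCreate()

# Assumed toy data; replace with your own feature and label columns.
df = spark.createDataFrame(
    [(1.0, 2.0, 0.0), (3.0, 4.0, 1.0), (5.0, 6.0, 0.0), (7.0, 8.0, 1.0)],
    ["f1", "f2", "label"],
)
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
train = assembler.transform(df)

model = RandomForestClassifier(featuresCol="features", labelCol="label").fit(train)

# A SparseVector of per-feature importances, indexed like inputCols.
print(model.featureImportances)
```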
Different methods of explaining
A good overview of ways to explain a Random Forest model.

Explaining Feature Importance by example of a Random Forest
Learn the most popular methods of determining feature importance in Python
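One widely recommended method beyond the built-in impurity-based scores is permutation importance, which scikit-learn provides directly; a minimal sketch on a held-out split (the data and model are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and measure the drop in
# score; less biased towards high-cardinality features than impurity scores.
result = permutation_importance(rf, X_test, y_test, n_repeats=10, random_state=0)
for i in range(X.shape[1]):
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```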
Visual explanation for each prediction
The treeinterpreter library does the job, breaking each individual prediction down into a bias term plus a contribution from every feature.
https://pypi.org/project/treeinterpreter/
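A minimal sketch of its documented API: ti.predict returns the prediction, a bias term (the training-set prior), and per-feature contributions that sum back to the prediction (the model and data are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from treeinterpreter import treeinterpreter as ti

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# For each row: prediction = bias + sum of per-feature contributions.
# For a classifier, each value carries one entry per class.
prediction, bias, contributions = ti.predict(rf, X[:3])

for row in range(3):
    print("prediction:", prediction[row])
    print("bias:", bias[row])
    for i, contrib in enumerate(contributions[row]):
        print(f"  feature {i} contribution: {contrib}")
```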