Predictive process monitoring (PPM) techniques have become a key asset in both public and private organizations, as they provide crucial operational support for business processes. Thanks to the availability of large amounts of data, several solutions based on machine and deep learning have been proposed in the literature for monitoring process instances. These state-of-the-art approaches treat accuracy as the main objective of predictive modeling, while often neglecting the interpretability of the model. Recent studies have addressed the interpretability of predictive models, leading to the emerging area of Explainable AI (XAI). In an attempt to bring XAI into PPM, in this paper we propose a fully interpretable model for outcome prediction. The proposed method is based on a set of fuzzy rules acquired from event data by training a neuro-fuzzy network. This solution provides a good trade-off between the accuracy and the interpretability of the predictive model. Experimental results on several benchmark event logs are encouraging and underscore the importance of developing explainable models for predictive process analytics.
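To illustrate the kind of rule base a neuro-fuzzy network can learn (this is a generic sketch, not the paper's actual architecture), the following shows zero-order Takagi-Sugeno inference over two hypothetical, normalized event-log features (case duration and number of activities); all rule parameters and feature names here are illustrative assumptions:

```python
import numpy as np

def gauss(x, c, s):
    # Gaussian membership degree of x in a fuzzy set with center c, width s
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def fuzzy_predict(x, rules):
    # Zero-order Takagi-Sugeno inference: each rule is
    # (centers, widths, consequent). The firing strength of a rule is the
    # product of per-feature membership degrees (fuzzy AND); the output is
    # the firing-strength-weighted average of the rule consequents.
    strengths = np.array([np.prod(gauss(x, c, s)) for c, s, _ in rules])
    consequents = np.array([q for _, _, q in rules])
    return strengths @ consequents / strengths.sum()

# Two hypothetical rules over (case duration, number of activities),
# both features normalized to [0, 1]:
rules = [
    # "IF duration is short AND activities are few THEN outcome ~ positive"
    (np.array([0.2, 0.3]), np.array([0.3, 0.3]), 0.9),
    # "IF duration is long AND activities are many THEN outcome ~ negative"
    (np.array([0.8, 0.7]), np.array([0.3, 0.3]), 0.1),
]

print(fuzzy_predict(np.array([0.25, 0.3]), rules))  # ≈ 0.84, close to rule 1's outcome
```

In a neuro-fuzzy network the centers, widths, and consequents are the trainable parameters, fitted to the event data by gradient descent; after training, each rule can be read back as an interpretable IF-THEN statement, which is the source of the accuracy/interpretability trade-off mentioned above.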