Modern vehicles are complex Cyber-Physical Systems (CPS) that operate across diverse environments, where the failure of critical subsystems such as the engine, transmission, brakes, and fuel system can lead to unplanned downtime and significant maintenance costs. To mitigate these challenges, continuous health monitoring and early detection of abnormal sensor behavior are essential. Data-driven Digital Twin (DT) systems offer a promising solution by leveraging sensor data to model and predict the state of vehicle subsystems. However, existing DT solutions rely primarily on Deep Learning (DL) techniques, which often function as black-box models and provide little to no insight into their decision-making processes. This lack of explainability hinders trust and informed decision-making among automotive stakeholders. To address this challenge, this paper presents TwinExplainer, a novel three-layered architectural pipeline designed to explain the predictions made by DL models in data-driven DTs. TwinExplainer provides visual explanations through graphs and complements them with linguistic explanations generated by Large Language Models (LLMs), offering stakeholders intuitive insights into feature contributions and model behavior at both local and global levels.
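As a rough illustration of the kind of local and global feature-contribution explanations described above, the following sketch uses a simple occlusion-based attribution scheme on a toy sensor model. This is a hypothetical example, not the paper's actual method: the sensor names, the stand-in linear "twin" model, and the prompt template are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed sensor channels for illustration only.
FEATURES = ["engine_rpm", "coolant_temp", "fuel_rate", "brake_pressure"]

# Stand-in "digital twin" predictor of a subsystem state (e.g. engine load):
# a fixed linear model used purely so the example is self-contained.
WEIGHTS = np.array([0.5, 0.2, 0.25, 0.05])

def predict(x):
    return x @ WEIGHTS

X = rng.normal(size=(100, 4))  # synthetic sensor readings

def local_attributions(x):
    """Local explanation: how much the prediction changes when each
    feature is occluded (zeroed) in this one sample."""
    base = predict(x)
    return np.array([
        base - predict(np.where(np.arange(len(x)) == i, 0.0, x))
        for i in range(len(x))
    ])

# Global explanation: mean absolute local attribution over the dataset.
global_importance = np.mean(
    np.abs([local_attributions(x) for x in X]), axis=0
)

# Linguistic layer: turn the top local contribution into a prompt that
# could be handed to an LLM for a natural-language explanation.
attr = local_attributions(X[0])
top = int(np.argmax(np.abs(attr)))
prompt = (
    f"The model predicted {predict(X[0]):.2f}. The largest contribution "
    f"came from '{FEATURES[top]}'. Explain this to a maintenance engineer."
)
print(prompt)
```

A production pipeline would replace the occlusion scheme with a principled attribution method (e.g. Shapley-value-based explanations) and the print statement with an actual LLM call, but the two-level structure — per-prediction attributions aggregated into dataset-wide importances, then verbalized — is the same.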