The adaptability of machine learning methods can be exploited by an adversary to cause machine learning to malfunction, a process known as Adversarial Learning. Our aim is to test the hypothesis that an ensemble of neural networks trained on data manipulated by an adversary is more robust than a single network. We [Wang, Shafi, Lokan & Abbass] investigate two attack types: targeted and random. We use Mahalanobis distance and covariance matrices to select targeted attacks. The experiments use both artificial and real-world datasets. The results demonstrate that an ensemble of neural networks trained on attacked data is more robust against the attack than a single network. The significance of the current work lies in the fact that targeted attacks are not white noise, but deliberately planned series of actions.