Political Optimizer (PO) is a recently proposed human-behavior-inspired meta-heuristic that has shown strong performance on complex multimodal functions as well as engineering optimization problems. The good convergence speed and well-balanced exploratory and exploitative behavior of PO motivate us to employ it for training Feedforward Neural Networks (FNNs). The FNN-training problem is formulated as an optimization problem whose objective is to minimize the Mean Squared Error (MSE) or Cross-Entropy (CE). The weights and biases of the FNN are arranged into a vector called a candidate solution. The performance of the proposed trainer is evaluated on 5 classification datasets and 5 function-approximation datasets that have already been used in the literature. In recent years, the Grey Wolf Optimizer (GWO), Moth Flame Optimization (MFO), Multi-Verse Optimizer (MVO), Sine-Cosine Algorithm (SCA), Whale Optimization Algorithm (WOA), Ant Lion Optimizer (ALO), and Salp Swarm Algorithm (SSA) have been successfully applied to neural network training. In this paper, we compare the performance of PO with these algorithms and show that PO either outperforms them or performs equivalently. The MSE, CE, training-set accuracy, and test-set accuracy are used as metrics for the comparative analysis, and the non-parametric Wilcoxon rank-sum test is used to establish the statistical significance of the results. Based on this performance, we recommend PO for training artificial neural networks to solve classification and regression problems.
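To make the problem formulation concrete, the following is a minimal sketch (not the paper's implementation) of how an FNN's weights and biases can be flattened into a single candidate-solution vector and scored with an MSE fitness function that a population-based optimizer such as PO would minimize. The 2-3-1 architecture, the function names, and the XOR toy data are illustrative assumptions.

```python
import math
import random

def dim(n_in, n_hidden, n_out):
    # Total trainable parameters for one hidden layer:
    # input->hidden weights + hidden biases + hidden->output weights + output biases.
    return n_in * n_hidden + n_hidden + n_hidden * n_out + n_out

def forward(vec, x, n_in, n_hidden, n_out):
    # Unpack the flat candidate-solution vector into layer parameters.
    i = 0
    w1 = [vec[i + r * n_in:i + (r + 1) * n_in] for r in range(n_hidden)]
    i += n_in * n_hidden
    b1 = vec[i:i + n_hidden]; i += n_hidden
    w2 = [vec[i + r * n_hidden:i + (r + 1) * n_hidden] for r in range(n_out)]
    i += n_hidden * n_out
    b2 = vec[i:i + n_out]
    # Hidden layer with sigmoid activation.
    h = [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
         for row, b in zip(w1, b1)]
    # Linear output layer.
    return [sum(w * hi for w, hi in zip(row, h)) + b for row, b in zip(w2, b2)]

def mse_fitness(vec, samples, n_in, n_hidden, n_out):
    # Mean squared error over the training samples; lower is better,
    # so the optimizer minimizes this value.
    err = 0.0
    for x, target in samples:
        y = forward(vec, x, n_in, n_hidden, n_out)
        err += sum((yi - ti) ** 2 for yi, ti in zip(y, target))
    return err / len(samples)

# Toy usage: score a random candidate on the XOR problem (2-3-1 network).
random.seed(0)
samples = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
cand = [random.uniform(-1, 1) for _ in range(dim(2, 3, 1))]
print(mse_fitness(cand, samples, 2, 3, 1))
```

A meta-heuristic trainer would maintain a population of such vectors, evaluate each with `mse_fitness` (or a cross-entropy analogue for classification), and update the population according to its own search rules until the error stops improving.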