The training and inference phases of a neural network can be analysed individually.
For backpropagation (training):
O(n^5), where n is simultaneously the number of neurons per layer, the number of layers, and the number of training iterations.
For forward propagation (inference):
O(n^4), where n is the number of neurons in a layer, including the bias unit.
Assumption: every layer has the same number of neurons, and the number of layers equals the number of neurons per layer.
Reference: Computational Complexity Of Neural Networks
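Under that assumption, the growth rates can be checked with a minimal operation-count sketch (a back-of-the-envelope model, not a benchmark; it assumes naive dense matrix-vector products, that the number of training examples also equals n, and that backpropagation costs only a constant factor more than the forward pass):

```python
# Back-of-the-envelope MAC (multiply-accumulate) counts under the assumption
# above: n layers, n neurons per layer, n training examples, n iterations.

def forward_macs(n: int) -> int:
    """One example through n dense layers: n layers * n^2 MACs each = n^3."""
    return n ** 3

def inference_macs(n: int) -> int:
    """One pass over all n examples: n * n^3 = n^4."""
    return n * forward_macs(n)

def training_macs(n: int) -> int:
    """n iterations over n examples; backprop adds only a constant factor,
    which big-O drops, so the count is n * n^4 = n^5."""
    return n * inference_macs(n)

for n in (4, 8, 16):
    print(n, inference_macs(n), training_macs(n))
# Doubling n scales inference cost by 2^4 = 16x and training cost by 2^5 = 32x.
```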
Given a single-layer RNN with 'a' rectified linear units and inputs of length 'b', a population prediction error of ε can be achieved with at most O((a^4 * b) / ε^2) samples.
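To see how this bound scales (the constant hidden by the big-O is ignored, and `rnn_sample_bound` is a purely illustrative helper, not from the source):

```python
def rnn_sample_bound(a: int, b: int, eps: float) -> float:
    """Scaling of the O(a^4 * b / eps^2) sample bound (big-O constant dropped)."""
    return (a ** 4) * b / (eps ** 2)

print(rnn_sample_bound(a=32, b=100, eps=0.10))  # ~1.05e10
print(rnn_sample_bound(a=32, b=100, eps=0.05))  # halving eps quadruples the bound
```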
Let 'd' be the maximum depth of a tree and 'K' the total number of trees. For the exact greedy algorithm, the complexity is O(K * d * x * log(n)).
To find the optimal split at each node, the data must be re-sorted on each column. This incurs a cost at each tree level that is very crudely approximated by O(x * log(n)): there are 'x' nonzero entries per feature, and at each level you are sorting lists, each of length at most 'n'.
Note: The log(n) factor can be reduced further by using binary search
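For concreteness, here is a toy single-feature version of the exact greedy split search (a sketch, not XGBoost's implementation; the gain is the structure-score gain from the paper with the 1/2 factor and the γ complexity penalty dropped). The sort at the top is what contributes the log term above:

```python
def best_split(values, grad, hess, reg_lambda=1.0):
    """Toy exact-greedy split search on one feature.

    Sorting the node's rows dominates the cost, which is where the
    per-level O(x * log(n)) term in the complexity above comes from.
    """
    order = sorted(range(len(values)), key=lambda i: values[i])
    G, H = sum(grad), sum(hess)
    g_left = h_left = 0.0
    best_gain, best_threshold = 0.0, None
    for rank, i in enumerate(order[:-1]):
        g_left += grad[i]
        h_left += hess[i]
        # A split between identical feature values is not realisable.
        if values[i] == values[order[rank + 1]]:
            continue
        g_right, h_right = G - g_left, H - h_left
        gain = (g_left ** 2 / (h_left + reg_lambda)
                + g_right ** 2 / (h_right + reg_lambda)
                - G ** 2 / (H + reg_lambda))
        if gain > best_gain:
            best_gain = gain
            best_threshold = (values[i] + values[order[rank + 1]]) / 2
    return best_gain, best_threshold

# Toy usage with made-up gradients/hessians:
vals = [3.0, 1.0, 4.0, 1.5, 2.0]
grad = [0.5, -1.0, 1.2, -0.8, 0.1]
hess = [1.0] * 5
print(best_split(vals, grad, hess))
```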
References: XGBoost: A Scalable Tree Boosting System; XGBoost paper - time complexity analysis