In this paper we introduce an acceleration of the gradient descent algorithm with backtracking. The idea is to modify the steplength $t_k$ by means of a positive parameter $\theta_k$, in a multiplicative manner, so as to improve the behaviour of the classical gradient algorithm. It is shown that the resulting algorithm remains linearly convergent, but the reduction in function value is significantly improved.
Keywords: gradient descent methods, backtracking, acceleration methods
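To make the scheme concrete, the following is a minimal Python sketch of a gradient descent iteration in the spirit of the abstract: a backtracking (Armijo) line search produces $t_k$, and the step is then scaled multiplicatively by a positive parameter $\theta_k$. The function name `accelerated_gd`, its parameters, and the curvature-based choice of $\theta_k$ shown here are illustrative assumptions, not necessarily the exact formula derived in the paper.

import numpy as np

def accelerated_gd(f, grad, x0, c=1e-4, rho=0.5, t0=1.0,
                   tol=1e-8, max_iter=1000):
    """Gradient descent with Armijo backtracking and a multiplicative
    steplength modification t_k -> theta_k * t_k (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gg = g @ g
        if np.sqrt(gg) < tol:
            break
        # Backtracking (Armijo) line search along the direction -g:
        # shrink t until sufficient decrease holds.
        t = t0
        fx = f(x)
        while f(x - t * g) > fx - c * t * gg:
            t *= rho
        # One curvature-based choice of the acceleration parameter
        # theta_k (an assumption for illustration): estimate local
        # curvature along -g from a gradient difference.
        z = x - t * g
        y = grad(z) - g
        b = -t * (y @ g)                  # curvature estimate; > 0 when f is convex along -g
        theta = (t * gg) / b if b > 0 else 1.0
        # Accelerated step: x_{k+1} = x_k - theta_k * t_k * g_k
        x = x - theta * t * g
    return x

As a sanity check of the design, on a convex quadratic $f(x) = \tfrac{1}{2}x^\top A x$ this choice gives $\theta_k t_k = (g_k^\top g_k)/(g_k^\top A g_k)$, i.e. the exact minimizing steplength along $-g_k$, so the multiplicative correction recovers the full decrease that plain backtracking alone may miss.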