How to update learning rate during Flux training in a better manner?

Hi,

I am trying to update the learning rate during training. I did it with a custom training loop, like the one below:

using Flux
using Flux.Data: DataLoader

M = 10  # input dimension
N = 15  # number of samples
O = 2   # output dimension

X = repeat(1.0:10.0, outer=(1, N))  # input: M×N matrix
Y = repeat(1.0:2.0, outer=(1, N))   # target: O×N matrix

data = DataLoader((X, Y), batchsize=5, shuffle=true)

dims = [M, O]
layers = [Dense(dims[i], dims[i+1]) for i in 1:length(dims)-1]
m = Chain(layers...)

L(x, y) = Flux.Losses.mse(m(x), y)  # cost function

ps = Flux.params(m)  # model parameters

opa = ADAM  # optimiser constructor
lr = 0.95   # initial learning rate

function my_custom_train!(loss, ps, data, opa, lr)
  local training_loss
  for (index, d) in enumerate(data)
    gs = gradient(ps) do
      training_loss = loss(d...)
      return training_loss
    end
    @show training_loss
    # Rebuild the optimiser with a decayed learning rate each iteration.
    # Note this also discards ADAM's accumulated moment estimates.
    opt = opa(lr / index)
    Flux.update!(opt, ps, gs)
  end
end

my_custom_train!(L, ps, data, opa, lr)

Is there a better way to do the same thing?
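
For comparison, here is a sketch of one alternative: construct the optimiser once and mutate its learning rate in place, so ADAM's moment estimates are not discarded on every iteration. This assumes the ADAM struct stores its step size in a mutable eta field (as in recent Flux releases); my_custom_train2! is just an illustrative name.

function my_custom_train2!(loss, ps, data, opt, lr)
  local training_loss
  for (index, d) in enumerate(data)
    gs = gradient(ps) do
      training_loss = loss(d...)
      return training_loss
    end
    @show training_loss
    opt.eta = lr / index  # update the step size in place; moment estimates are preserved
    Flux.update!(opt, ps, gs)
  end
end

opt = ADAM(lr)  # construct the optimiser once, outside the loop
my_custom_train2!(L, ps, data, opt, lr)

Flux also provides a scheduling wrapper, Flux.Optimise.ExpDecay, which can be composed with ADAM via Flux.Optimise.Optimiser when a fixed exponential schedule is sufficient.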

