Gradually decay the weight of loss function


Question


I am not sure if this is the right place to ask this question; feel free to tell me if I should remove the post.

I am quite new to PyTorch and am currently working with CycleGAN (the PyTorch implementation) as part of my project, and I understand most of the implementation of CycleGAN.

I read the paper 'CycleGAN with Better Cycles' and I am trying to apply the modifications it describes. One of those modifications is cycle consistency weight decay, which I don't know how to apply.

optimizer_G.zero_grad()

# Identity loss
loss_id_A = criterion_identity(G_BA(real_A), real_A)
loss_id_B = criterion_identity(G_AB(real_B), real_B)

loss_identity = (loss_id_A + loss_id_B) / 2

# GAN loss
fake_B = G_AB(real_A)
loss_GAN_AB = criterion_GAN(D_B(fake_B), valid)
fake_A = G_BA(real_B)
loss_GAN_BA = criterion_GAN(D_A(fake_A), valid)

loss_GAN = (loss_GAN_AB + loss_GAN_BA) / 2

# Cycle consistency loss
recov_A = G_BA(fake_B)
loss_cycle_A = criterion_cycle(recov_A, real_A)
recov_B = G_AB(fake_A)
loss_cycle_B = criterion_cycle(recov_B, real_B)

loss_cycle = (loss_cycle_A + loss_cycle_B) / 2

# Total loss
loss_G = (loss_GAN
          + lambda_cyc * loss_cycle      # lambda_cyc is 10
          + lambda_id * loss_identity)   # lambda_id is 0.5 * lambda_cyc

loss_G.backward()
optimizer_G.step()

My question is: how can I gradually decay the weight of the cycle consistency loss?

Any help in implementing this modification would be appreciated.

This is from the paper: "Cycle consistency loss helps to stabilize training a lot in early stages but becomes an obstacle towards realistic images in later stages. We propose to gradually decay the weight of cycle consistency loss λ as training progresses. However, we should still make sure that λ is not decayed to 0 so that generators won't become unconstrained and go completely wild."

Thanks in advance.


Answer 1:


Below is a prototype function you can use!

def loss_fn(other_params, decay_params, step):
    # ... compute the non-cycle losses here (GAN loss, identity loss, ...) -> loss
    # ... compute the cycle consistency loss here -> cyclic_loss
    # look up the decayed lambda for the current step
    cur_lambda = compute_lambda(step, decay_params)

    final_loss = loss + cur_lambda * cyclic_loss
    return final_loss
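In the CycleGAN code from the question, loss would correspond to loss_GAN + lambda_id * loss_identity, and cyclic_loss to loss_cycle.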

And here is a compute_lambda function that linearly decays λ from 10 to 1e-5 over 50 steps (in the usage below, the decay starts at step 50):

def compute_lambda(step, decay_params):
    final_lambda = decay_params["final"]
    initial_lambda = decay_params["initial"]
    total_step = decay_params["total_step"]
    start_step = decay_params["start_step"]

    if step <= start_step:
        # before the decay window: keep the initial weight
        return initial_lambda
    elif step >= start_step + total_step:
        # after the decay window: hold at the final (non-zero) weight
        return final_lambda
    else:
        # inside the window: interpolate linearly between the two
        return initial_lambda + (step - start_step) * (final_lambda - initial_lambda) / total_step

# Usage:
compute_lambda(i, {"final": 1e-5, "initial": 10, "total_step": 50, "start_step": 50})
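
For example, here is a minimal sketch of wiring this schedule into the training loop from the question. The step counter and n_epochs are assumptions on my part; count in whatever unit (epochs or batches) you want the decay to run over:

decay_params = {"final": 1e-5, "initial": 10, "total_step": 50, "start_step": 50}

for step in range(n_epochs):  # n_epochs is a placeholder for your training length
    optimizer_G.zero_grad()
    # ... compute loss_GAN, loss_cycle, loss_identity as in the question ...

    # decayed cycle consistency weight for this step
    lambda_cyc = compute_lambda(step, decay_params)
    lambda_id = 0.5 * lambda_cyc

    loss_G = (loss_GAN
              + lambda_cyc * loss_cycle
              + lambda_id * loss_identity)

    loss_G.backward()
    optimizer_G.step()

With decay_params as above, lambda_cyc stays at 10 for the first 50 steps, decays linearly to 1e-5 over the next 50, and is then held there, so it never reaches 0, as the paper requires.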


Source: https://stackoverflow.com/questions/54047725/gradually-decay-the-weight-of-loss-function
