In this paper, we propose a generic method for training conditional generative adversarial networks on audio data. The same approach can therefore be applied to audio-learning problems that previously required entirely different loss formulations. The method is useful for labeling noise signals that share a certain number of identical frequencies, for generating speech labels corresponding to each frequency, and for generating audio data for noise cancellation. To this end, we propose a U-Net-based sound restoration process called Sound U-net. Our system is widely applicable because it is easy to implement, requires no parameter tuning, and reduces the training time on audio data. In our experiments, reasonable results were obtained without manually adjusting the loss function.
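The abstract does not give the architecture, but the encoder-decoder-with-skip-connections pattern that defines any U-Net-style generator can be sketched in a few lines. The following toy example is an assumption-laden illustration, not the paper's model: it replaces learned convolutions with fixed average pooling and nearest-neighbor upsampling purely to show how the decoder reuses matching-resolution encoder activations on a 1-D audio signal.

```python
import numpy as np

def downsample(x):
    """Halve temporal resolution by average-pooling adjacent sample pairs."""
    return x.reshape(-1, 2).mean(axis=1)

def upsample(x):
    """Double temporal resolution by repeating each sample."""
    return np.repeat(x, 2)

def toy_unet(x, depth=3):
    """Toy U-Net-style forward pass (no learned weights).

    The encoder shrinks the signal `depth` times, saving each
    activation; the decoder restores resolution level by level and
    adds back the saved activation of matching length (the skip
    connection), so fine temporal detail bypasses the bottleneck.
    """
    skips = []
    for _ in range(depth):
        skips.append(x)                # save activation for skip path
        x = downsample(x)              # encoder: reduce resolution
    for _ in range(depth):
        x = upsample(x) + skips.pop()  # decoder: restore + skip
    return x

# Toy "audio" input whose length is divisible by 2**depth.
signal = np.sin(np.linspace(0, 8 * np.pi, 64))
out = toy_unet(signal)
assert out.shape == signal.shape  # U-Net output matches input length
```

The shape-preserving property shown by the final assertion is what makes such a generator suitable for restoration tasks, where the output must align sample-for-sample with the degraded input.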