High dynamic range (HDR) image generation is a useful technology in various applications. An easy way to obtain an HDR image is the multiple-exposure fusion technique, which fuses a set of sequential exposures. However, it suffers from ghosting artifacts in scenes with significant object motion, which is the key challenge in HDR imaging. Detecting and removing ghosting artifacts is therefore crucial for automatically generating HDR images of dynamic scenes. Although previous methods align the low dynamic range (LDR) images or detect motion regions to remove ghosting artifacts from the final HDR image, they still cannot generate a satisfactory result. In this paper, we propose a novel deep neural network with learned generator constraints, called GHDRNet, that blends information from all the exposures. Unlike previous methods, which only use the ground truth to learn the network parameters for HDR image reconstruction, our method is based on a novel successive network that not only estimates the HDR result from a dynamic scene but also restores the static LDR images from the estimated HDR image. This specially designed network constrains the estimated HDR image by restoring the corresponding LDR images. To capture deep hierarchical features and enlarge the receptive field for recovering abundant details, we also propose an enhancement block (EBlock), a topological structure that aggregates features at several scales. Furthermore, considering that cumbersome networks contain redundant parameters, we introduce a lightweight residual module into EBlock, which effectively reduces the number of network parameters. Extensive evaluations show the advantages of our method over related state-of-the-art methods on three benchmarks.
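The core idea of the generator constraint, restoring the input LDR exposures from the estimated HDR image and penalizing the mismatch, can be sketched with the differentiable camera model commonly used in deep HDR fusion. This is a minimal NumPy illustration, not the paper's actual network or loss: the function names, the gamma value, and the L1 loss form are assumptions for demonstration.

```python
import numpy as np

def hdr_to_ldr(hdr, exposure_time, gamma=2.2):
    # Map a linear-domain HDR image back to an LDR exposure using the
    # standard model L = clip((H * t) ** (1 / gamma)); a network's restored
    # LDRs can be compared against the real inputs through such a mapping.
    return np.clip((hdr * exposure_time) ** (1.0 / gamma), 0.0, 1.0)

def generator_constraint_loss(estimated_hdr, ldr_inputs, exposure_times):
    # Hypothetical constraint loss: average L1 distance between the LDRs
    # restored from the estimated HDR and the captured LDR inputs.
    loss = 0.0
    for ldr, t in zip(ldr_inputs, exposure_times):
        restored = hdr_to_ldr(estimated_hdr, t)
        loss += np.mean(np.abs(restored - ldr))
    return loss / len(ldr_inputs)
```

If the estimated HDR image is consistent with the captured exposures, the restored LDRs match the inputs and the loss vanishes; any ghosting or hallucinated content in the HDR estimate shows up as a nonzero restoration error.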