Differential Privacy only bounds how much information the model can leak about any individual sample in the training set. This guarantees that an input is not reproduced exactly, but any composition of training samples remains a valid output; in image generation such a composition usually amounts to a heavily distorted image.
An example of DP in image generation (using GANs): https://par.nsf.gov/servlets/purl/10283631
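To make the guarantee concrete, here is a minimal sketch of the DP-SGD-style update typically used to train such generative models (the function name and parameters are illustrative, not from the linked paper): each per-sample gradient is clipped to a fixed norm so no single training example can move the update by more than that bound, and calibrated Gaussian noise is added on top.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD-style gradient aggregation step (sketch).

    Clips each per-sample gradient to `clip_norm`, sums the clipped
    gradients, and adds Gaussian noise scaled to the clipping bound, so
    that no individual sample can dominate the resulting update.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so the gradient norm is at most clip_norm.
        scale = min(1.0, clip_norm / max(norm, 1e-12))
        clipped.append(g * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)

# The first gradient has norm 5, so it is clipped to norm 1 before averaging.
grads = [np.array([3.0, 4.0]), np.array([0.1, -0.2])]
update = dp_sgd_step(grads)
```

The clipping bound is what makes the privacy accounting possible: the noise scale is set relative to `clip_norm`, the worst-case influence of any single sample.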