This paper presents a Generative Adversarial Network (GAN)-based approach for sketch-to-image translation, converting hand-drawn sketches into realistic images. Both the generator and discriminator use convolutional neural network architectures. The loss function combines a pixel-wise mean absolute error (MAE) with a contextual loss based on the Kullback-Leibler (KL) divergence between grayscale intensity distributions. This combination encourages the generated images to resemble real photographs not only in pixel values but also in the overall distribution of light and shadow. The model is trained on a dataset of paired sketches and photographs, and performance is evaluated with L2 distance and the Structural Similarity Index Measure (SSIM). Experimental results demonstrate that the proposed model significantly improves image realism and structural similarity over existing techniques, achieving an SSIM score of 78.58% and outperforming previous approaches to sketch-to-image translation.
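The abstract does not give the exact formulation of the combined objective, but the idea of pairing a pixel-wise MAE term with a KL-divergence term over grayscale intensity distributions can be sketched as follows. This is a minimal illustration, assuming histogram-based intensity distributions, images normalized to [0, 1], and a simple additive weighting; the function name, bin count, and weight are hypothetical, not taken from the paper.

```python
import numpy as np

def combined_loss(generated, real, n_bins=64, weight=1.0, eps=1e-8):
    """Illustrative sketch of an MAE + KL-divergence objective.

    `generated` and `real` are grayscale image arrays with values
    in [0, 1]. The KL term compares normalized intensity histograms,
    a stand-in for the "grayscale intensity distributions" the
    abstract refers to.
    """
    # Pixel-wise mean absolute error between generated and real images
    mae = np.mean(np.abs(generated - real))

    # Normalized grayscale intensity histograms (empirical distributions)
    p, _ = np.histogram(real, bins=n_bins, range=(0.0, 1.0))
    q, _ = np.histogram(generated, bins=n_bins, range=(0.0, 1.0))
    p = p / p.sum() + eps  # eps avoids log(0) and division by zero
    q = q / q.sum() + eps

    # KL(p || q): penalizes generated intensity distributions
    # that fail to cover the real one
    kl = np.sum(p * np.log(p / q))

    return mae + weight * kl
```

For identical inputs both terms vanish, so the loss is zero; shifting the generated image's intensities raises both the MAE and the KL term, which is the behavior the combined objective is meant to reward against.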
Othman, S., Mansour, Y., & Hassan, A. (2025). Forensic Image Synthesis for Criminal Investigation: Bridging the Gap Between Witness Descriptions and Reality. Journal of the ACS Advances in Computer Science, 16(1), -. doi: 10.21608/asc.2025.383602.1036