Real-world facial recognition systems, particularly in the security domain, often rely on a single image per individual for identification in uncontrolled environments, where factors such as lighting, pose, expression, and occlusion vary significantly. This scenario, known as Single Sample Per Person (SSPP), remains a significant challenge for current facial recognition methods. To address this problem, we propose the AD-VAE (Adversarial Disentangling Variational Autoencoder) framework, which combines Variational Autoencoder (VAE) techniques with Generative Adversarial Networks (GANs) to generate identity-preserving facial prototypes. The AD-VAE architecture comprises four neural networks: an encoder, a decoder, a generator, and a multitask discriminator. We evaluated the proposed method on widely used benchmark datasets, including AR, E-YaleB, CAS-PEAL, FERET, and LFW, where it outperformed state-of-the-art methods, achieving recognition rates ranging from 84.9% to 99.6%. These results demonstrate the effectiveness and robustness of the approach, as well as its potential for practical applications and future research in the field.
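The four-network data flow named in the abstract can be sketched as follows. This is a minimal, purely illustrative skeleton of how an encoder, generator, decoder, and multitask discriminator might be wired together for prototype generation; every function body, dimension, and name here is an assumption for illustration, not the authors' implementation (the real networks would be learned deep models).

```python
import random

# Toy sketch of the AD-VAE data flow: the encoder splits a latent code
# into identity and variation parts, the generator synthesizes an
# identity-preserving prototype from the identity part alone, the
# decoder reconstructs the input from both parts, and a multitask
# discriminator scores realism and predicts a pseudo identity label.
# All shapes and computations below are illustrative placeholders.

DIM = 8  # hypothetical latent dimensionality for this sketch

def encoder(image):
    # Map an input "image" (here a toy feature vector) to a latent
    # code, split into identity and variation halves.
    z = [x * 0.5 for x in image]
    half = len(z) // 2
    return z[:half], z[half:]  # (identity code, variation code)

def generator(identity_code):
    # Synthesize a prototype from the identity code alone,
    # discarding pose/illumination/expression variation.
    return [2.0 * x for x in identity_code] * 2

def decoder(identity_code, variation_code):
    # Reconstruct the original input from both latent parts.
    return [2.0 * x for x in identity_code + variation_code]

def discriminator(image):
    # Multitask head: (real/fake score, pseudo identity label).
    score = sum(image) / (len(image) or 1)
    label = int(abs(score) * 10) % 5
    return score, label

def make_prototype(image):
    # Full forward pass: the generator output plays the role of the
    # identity-preserving prototype used for SSPP matching.
    identity_code, _variation_code = encoder(image)
    return generator(identity_code)

image = [random.uniform(-1.0, 1.0) for _ in range(DIM)]
prototype = make_prototype(image)
```

In this toy layout, the adversarial training described in the abstract would pit the generator/decoder against the multitask discriminator, while the variational encoder enforces the identity/variation disentanglement that makes single-sample prototypes possible.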