This thesis investigates how cartoon and comic images can be better exploited in the context of artistic style conversion. Specifically, we study the translation of anime illustrations into manga representations, using a manga book as a style reference. Although state-of-the-art image-to-image translation models can convert images between different domains, methods for translating illustrations into the manga style remain scarce. We propose to exploit the unique characteristics of anime and manga images to produce a preliminary output that supports a two-stage translation process. We believe this approach can reduce model complexity while generating high-fidelity outputs. Furthermore, we aim to impose minimal restrictions on the target domain, making the translation unsupervised. Finally, the proposed framework’s output can be used to produce rich datasets of paired colored and synthetic manga images, which would support colorization methods that rely on large amounts of paired training data.