Pokemon Color Transfer
Introduction
Color transfer is a technique that modifies the color style of an image by borrowing or reassociating colors from another source while preserving the original spatial structure and content. By separating “what the image is” from “how the image is colored,” color transfer enables creative style manipulation without altering shapes, outlines, or semantic meaning. It has been widely used in artistic stylization, recoloring, and visual design tasks due to its ability to generate visually coherent outputs from simple palette or distribution transformations.
In the context of Pokemon artwork, color transfer offers a powerful and intuitive way to design new visual styles. Pokemon are typically drawn using a small number of distinctive colors that define their character identity (e.g., Charmander’s warm oranges or Squirtle’s blues). By transferring color styles between Pokemon, we can create novel, aesthetically pleasing variations that feel natural and consistent with the Pokemon universe, while still preserving the original line art and recognizable features. In this project, we focus on two primary goals:
a. Aesthetic quality — the recolored Pokemon should look visually pleasing and harmonious.
b. Content preservation — the recolored image should remain structurally similar to the original, with no distortions to shapes, edges, or important details.
To achieve these goals, we apply palette-based color transfer methods to generate new Pokemon styles and systematically evaluate the results. Aesthetic quality is assessed through a human-preference survey questionnaire, while similarity to the original image is measured using quantitative metrics such as FID, SSIM, and other perceptual distance measures. Together, these evaluations allow us to study both the creativity and fidelity of color-transferred Pokemon, providing a balanced assessment of stylistic expressiveness and structural preservation.
Background
Color and Style Transfer
Palette Extraction
Palette extraction is the process of analyzing an image and summarizing its millions of pixel colors into a small, representative set of key colors — known as a color palette. Instead of working directly with every pixel’s RGB value, palette extraction identifies the dominant or most perceptually important colors that define the visual appearance of the image. A palette typically contains only 4–8 colors, yet these colors capture the essential chromatic structure of the image. This compact representation removes noise, eliminates redundant colors, and preserves the underlying style of the original artwork. We implement two palette extraction methods.
K-Means
We use K-Means to cluster pixel values and use the cluster centers as the palette. To avoid clustering millions of pixels directly, we quantize RGB space into 16 bins per channel: a pixel with RGB value $(r, g, b)$ falls into bin $(\lfloor r/16 \rfloor, \lfloor g/16 \rfloor, \lfloor b/16 \rfloor)$.
For each non-empty bin, we count its pixels and compute the average Lab value of all pixels in that bin. To avoid the randomness of standard K-Means initialization, the method uses a weighted farthest-point initialization. We select the bin with the largest weight (pixel count) as the first center. Before choosing each new center, we compute each bin's squared Lab distance $d_i^2$ to its nearest existing center and apply attenuation: $w_i' = w_i \left(1 - e^{-d_i^2 / \sigma^2}\right)$.
We then pick the bin with the largest attenuated weight as the next center. Each histogram bin with Lab color $x_i$ and weight $w_i$ is assigned to the nearest center, and the center update rule is the weighted mean: $c_k = \frac{\sum_{i \in S_k} w_i x_i}{\sum_{i \in S_k} w_i}$, where $S_k$ is the set of bins assigned to center $c_k$.
Black and white anchors remain fixed during updates. The convergence criterion is $\max_k \| c_k^{(t+1)} - c_k^{(t)} \| < \epsilon$.
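As a concrete illustration, the histogram-based weighted K-Means above can be sketched as follows. This is a minimal NumPy version that clusters in RGB rather than Lab for self-containment; `sigma` and the iteration count are illustrative parameters, and the fixed black/white anchors are omitted:

```python
import numpy as np

def extract_palette(pixels_rgb, k=5, bins=16, sigma=80.0, iters=20):
    """Weighted K-Means on a quantized RGB histogram (sketch of the method
    above, using RGB instead of Lab for brevity)."""
    px = pixels_rgb.astype(np.float64)
    # Quantize RGB into `bins` levels per channel; accumulate counts and sums.
    idx = np.minimum((px / 256.0 * bins).astype(int), bins - 1)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    counts = np.bincount(flat, minlength=bins ** 3).astype(np.float64)
    sums = np.zeros((bins ** 3, 3))
    np.add.at(sums, flat, px)
    nonempty = counts > 0
    colors = sums[nonempty] / counts[nonempty, None]   # per-bin mean colors
    w = counts[nonempty]                               # per-bin weights

    # Weighted farthest-point init: attenuate weights near chosen centers.
    centers = [colors[np.argmax(w)]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((colors - c) ** 2, axis=1) for c in centers], axis=0)
        w_att = w * (1.0 - np.exp(-d2 / sigma ** 2))
        centers.append(colors[np.argmax(w_att)])
    C = np.array(centers)

    # Weighted Lloyd iterations over histogram bins (not raw pixels).
    for _ in range(iters):
        d = np.linalg.norm(colors[:, None, :] - C[None, :, :], axis=2)
        assign = np.argmin(d, axis=1)
        for j in range(k):
            m = assign == j
            if m.any():
                C[j] = np.average(colors[m], axis=0, weights=w[m])
    return C
```

Working on at most 16³ bin averages rather than millions of pixels is what keeps each Lloyd iteration cheap.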
Blind Separation Palette Extraction (BSS-LLE Method)
This second method treats palette extraction as a blind unmixing problem with spatial smoothness constraints. It is computationally more expensive but yields globally coherent palettes.
Each pixel $p$ forms a 5-D feature vector: $f_p = (L_p, a_p, b_p, \lambda x_p, \lambda y_p)$, where $(L_p, a_p, b_p)$ is the pixel's Lab color, $(x_p, y_p) \in [0,1]^2$ are normalized image coordinates, and $\lambda$ controls spatial smoothness. For each pixel, we find its nearest neighbors in feature space and compute LLE weights by solving: $\min_{w_p} \| f_p - \sum_{q \in N(p)} w_{pq} f_q \|^2$ subject to $\sum_{q \in N(p)} w_{pq} = 1$.
We then construct the LLE Laplacian: $M = (I - W)^\top (I - W)$.
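The per-pixel LLE weight subproblem has the standard closed-form solution via a regularized local Gram system; a minimal sketch (the `reg` regularizer is an assumption for numerical stability when the local Gram matrix is near-singular):

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Weights w minimizing ||x - sum_j w_j n_j||^2 subject to sum(w) = 1,
    via the regularized local Gram system (standard LLE closed form)."""
    Z = neighbors - x                      # shift neighbors into x's frame
    G = Z @ Z.T                            # local Gram matrix (k x k)
    G = G + reg * np.trace(G) * np.eye(len(G)) + reg * np.eye(len(G))
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                     # enforce the sum-to-one constraint
```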
We assume each pixel's color can be expressed as a mixture of palette colors: $I_p \approx \sum_k W_{pk} c_k$, where $W_{pk}$ are mixture weights and $c_k$ are palette colors. We minimize: $E(W, C) = \sum_p \| I_p - \sum_k W_{pk} c_k \|^2 + \lambda \operatorname{tr}(W^\top M W) + \beta \Phi(W)$, where the second term enforces spatial smoothness through the LLE Laplacian $M$ and $\Phi(W)$ is a sparsity penalty weighted by $\beta$.
The optimization uses alternating minimization:
1. Update W (closed-form linear system).
2. Hard-threshold W to enforce sparsity.
3. Update C by solving the least-squares problem $\min_C \| I - W C \|^2$.
4. Increase $\beta$ to gradually enforce sparsity (continuation method).
The learned palette may lie off-manifold. Thus each palette color is replaced by the mean of its nearest real RGB pixels. This ensures interpretability and consistent color reproduction.
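The snapping step can be sketched as follows (a hypothetical `snap_to_image_colors` helper; the neighbor count `m` is an illustrative parameter):

```python
import numpy as np

def snap_to_image_colors(palette, pixels, m=50):
    """Replace each learned palette color with the mean of its m nearest
    actual image pixels, so every entry is a realizable color."""
    snapped = []
    for c in palette:
        d = np.linalg.norm(pixels - c, axis=1)   # distance to every pixel
        nearest = np.argsort(d)[:m]              # m closest real pixels
        snapped.append(pixels[nearest].mean(axis=0))
    return np.array(snapped)
```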
Final Output
Both methods return a palette $P = \{c_1, \dots, c_K\}$ with $c_k \in \mathbb{R}^3$ (typically $K = 4$–$8$).
Methods
In this section, we describe the methods we use to transfer color between Pokémon.
Baseline: Palette Based Random Transfer
Palette-based random transfer works by first extracting a compact color palette from each image, then randomly matching colors between the two palettes to generate a playful and diverse recoloring. Instead of enforcing a strict one-to-one correspondence or optimizing for perceptual similarity, the method randomly permutes or samples palette colors and maps all pixels associated with a source palette color to a randomly chosen target palette color. This allows the transferred image to preserve structural details while producing vivid, surprising, and stylistically varied recolorings. Because it operates only on palette colors rather than individual pixels, palette-based random transfer is fast, interpretable, and ideal for generating creative variations in tasks like Pokemon color stylization. We use Palette-based random transfer as the baseline.
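A minimal sketch of the baseline, assuming both palettes have the same number of colors (nearest-palette assignment here uses plain RGB distance for brevity):

```python
import numpy as np

def random_palette_transfer(img, src_palette, tgt_palette, seed=None):
    """Map every pixel to its nearest source-palette color, then replace it
    with a randomly permuted target-palette color (equal-size palettes)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(tgt_palette))      # random one-to-one matching
    h, w, _ = img.shape
    px = img.reshape(-1, 3).astype(np.float64)
    d = np.linalg.norm(px[:, None, :] - src_palette[None, :, :], axis=2)
    assign = np.argmin(d, axis=1)                 # nearest source palette color
    out = tgt_palette[perm[assign]]               # remap via the permutation
    return out.reshape(h, w, 3).astype(np.uint8)
```

Because only the K-entry palette is permuted, re-running with a different seed yields a different stylization at negligible cost.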
Neighbor Segments
The Neighbor Segments (NS) method groups pixels into local, perceptually coherent regions and uses neighborhood relationships to guide smooth and consistent recoloring. Instead of treating each pixel independently, the image is first segmented into small regions (superpixels or color-coherent clusters), where each segment represents a set of spatially adjacent pixels with similar color statistics. Let the image be segmented into $S = \{s_1, \dots, s_n\}$, where each segment contains pixels with similar color features. These segments form a neighborhood graph $G = (V, E)$, where an undirected edge $(s_i, s_j) \in E$ indicates that the two segments touch in the spatial domain. The adjacency matrix of the graph satisfies $A_{ij} = 1$ if $(s_i, s_j) \in E$ and $A_{ij} = 0$ otherwise.
During color transfer, each segment is assigned one palette color. Let $P = \{p_1, \dots, p_K\}$ be the target palette, and let $c_i$ denote the transferred color assigned to segment $s_i$. The NS method encourages smoothness by minimizing a neighborhood-consistency energy: $E_{\text{smooth}} = \sum_{(i,j) \in E} w_{ij} \| c_i - c_j \|^2,$
where $w_{ij}$ is a weight encoding the similarity of segments $s_i$ and $s_j$ (e.g., based on Lab difference or boundary strength). This term penalizes large color differences between adjacent segments, preventing abrupt color transitions or blocky artifacts. At the same time, the transferred color for each segment should remain close to the palette color assigned to it by the palette mapping rule $\pi$: $E_{\text{data}} = \sum_i \| c_i - p_{\pi(i)} \|^2.$
The final transferred colors are obtained by minimizing the combined objective: $E = E_{\text{data}} + \lambda E_{\text{smooth}},$
where $\lambda$ controls the strength of neighborhood smoothing. This neighborhood-aware propagation allows the algorithm to maintain structural consistency, preserve texture boundaries, and generate recolorings that are both stable and visually coherent. Overall, Neighbor Segments provides a lightweight way to incorporate spatial smoothness into palette-based transfer, producing natural transitions while keeping computation efficient.
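Because the combined objective is quadratic, the transferred segment colors have a closed-form solution per channel; a minimal sketch, assuming the neighbor weights w_ij are collected in a symmetric matrix W:

```python
import numpy as np

def smooth_segment_colors(P, W, lam=1.0):
    """Minimize sum_i ||c_i - p_i||^2 + lam * sum_edges w_ij ||c_i - c_j||^2.
    Setting the gradient to zero gives (I + lam * L) C = P, with the graph
    Laplacian L = D - W.  P: (n, 3) mapped palette colors per segment."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    n = len(P)
    return np.linalg.solve(np.eye(n) + lam * L, P)
```

With lam = 0 each segment keeps its mapped palette color; increasing lam pulls adjacent segments toward each other, which is exactly the smoothing trade-off described above.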
Palette-aware cluster transfer
Palette-aware cluster transfer aligns the two images' palettes first, then propagates that mapping to all pixels using soft, per-pixel weights. Let the two extracted palettes be $P^A = \{p^A_1, \dots, p^A_K\}$ and $P^B = \{p^B_1, \dots, p^B_K\}$ in CIE Lab. We compute a one-to-one correspondence via the Hungarian algorithm by minimizing the total Lab distance: $\pi = \arg\min_{\pi} \sum_k \| p^A_k - p^B_{\pi(k)} \|.$ This yields a permutation $\pi$ that best matches palette colors across images.
For each pixel $p$ in image A, we compute soft memberships to A's palette using a temperatured softmax in Lab: $w_{pk} \propto \exp(-\tau \| I_p - p^A_k \|)$, normalized so that $\sum_k w_{pk} = 1$. For each matched pair $(p^A_k, p^B_{\pi(k)})$, we define the palette-to-palette shift in HSV: $\Delta_k = \mathrm{HSV}(p^B_{\pi(k)}) - \mathrm{HSV}(p^A_k)$, with hue wrapped to $[-0.5, 0.5]$ (in normalized units). The pixel's HSV is updated by a weighted blend of these shifts, $\sum_k w_{pk} \Delta_k$, followed by clipping to valid ranges and conversion back to RGB. The same procedure recolors image B by swapping the roles of $P^A$ and $P^B$.
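For the small palettes used here (4–8 colors), the optimal one-to-one matching can even be found by brute force over permutations; a minimal sketch using Euclidean distance as a stand-in for Lab distance (the report's implementation uses the Hungarian algorithm, which scales better):

```python
import itertools
import numpy as np

def match_palettes(pal_a, pal_b):
    """Return perm such that pal_a[i] is matched to pal_b[perm[i]],
    minimizing the total pairwise distance (exhaustive search, K <= 8)."""
    k = len(pal_a)
    cost = np.linalg.norm(pal_a[:, None, :] - pal_b[None, :, :], axis=2)
    best, best_perm = float("inf"), None
    for perm in itertools.permutations(range(k)):
        c = cost[np.arange(k), perm].sum()   # total cost of this matching
        if c < best:
            best, best_perm = c, perm
    return np.array(best_perm)
```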
Cluster Transfer without Palette
Cluster transfer without a palette discovers palette-like color regions automatically by clustering each image in a joint color–position space and then transfers colors by matching those clusters across the two images.
Cluster discovery: For each image, we build Lab+XY features $f_p = (L_p, a_p, b_p, \alpha x_p / W, \alpha y_p / H)$ for every pixel, where $(x_p, y_p)$ is the pixel coordinate, $W \times H$ are the image dimensions, and $\alpha$ (the xy_scale) balances spatial locality versus color. K-Means on these features yields clusters $\{G_1, \dots, G_K\}$. For each cluster, we store:
- the mean chroma $\mu_k \in \mathbb{R}^2$ (in ab space),
- the chroma covariance $\Sigma_k$,
- the spatial centroid $(\bar{x}_k, \bar{y}_k)$, and the area (pixel count) $n_k$.
Cluster matching: Let superscripts $A$ and $B$ denote the two images. We define a matching cost between cluster $i$ in A and cluster $j$ in B: $C_{ij} = w_c \| \mu^A_i - \mu^B_j \| + w_s \| (\bar{x}^A_i, \bar{y}^A_i) - (\bar{x}^B_j, \bar{y}^B_j) \| + w_a \left| \tfrac{n^A_i}{N^A} - \tfrac{n^B_j}{N^B} \right|,$ where $w_c, w_s, w_a$ weight color, spatial proximity, and relative area, and $N^A, N^B$ are the images' pixel counts. A one-to-one correspondence $\pi$ is obtained by minimizing the total cost with the Hungarian algorithm.
Per-pixel soft assignment: Each pixel $p$ in image A gets soft memberships to A's clusters using a temperatured softmax in chroma (ab) space: $w_{pk} \propto \exp(-\tau \| ab_p - \mu^A_k \|)$, normalized to sum to one, where $\tau$ (soft_tau) controls how sharply pixels commit to a single cluster (larger $\tau$ → crisper; smaller → smoother blends).
Cluster-pair color mapping: For each matched pair $(G^A_k, G^B_{\pi(k)})$ we define either a mean shift in chroma, $\Delta_k = \mu^B_{\pi(k)} - \mu^A_k$, or an optional affine mapping in chroma (when use_affine = True): $T_k(ab) = A_k (ab - \mu^A_k) + \mu^B_{\pi(k)}$, with $A_k$ derived from the two clusters' chroma covariances.
Pixel update and reconstruction: For a pixel $p$ in A with chroma vector $ab_p$, we blend the cluster-pair mappings:
- mean-shift mode: $ab'_p = ab_p + \sum_k w_{pk} \Delta_k$;
- affine mode: $ab'_p = \sum_k w_{pk} T_k(ab_p)$.
We preserve lightness ($L_p$) to keep shading and speculars intact, form $(L_p, ab'_p)$, then convert to RGB with clipping. The same symmetric procedure recolors image B using clusters discovered in B and matched to A.
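The mean-shift variant of the per-pixel update can be sketched as follows. Soft memberships are computed as exp(−τ·distance), so larger τ gives crisper assignments, matching the convention above:

```python
import numpy as np

def soft_mean_shift(ab, centers, shifts, tau=0.1):
    """Blend matched-cluster chroma shifts with a temperatured softmax.
    ab: (N, 2) pixel chroma; centers: (K, 2) source cluster means;
    shifts: (K, 2) matched-pair mean shifts."""
    d = np.linalg.norm(ab[:, None, :] - centers[None, :, :], axis=2)
    w = np.exp(-tau * d)
    w = w / w.sum(axis=1, keepdims=True)   # memberships sum to 1 per pixel
    return ab + w @ shifts                 # weighted blend of cluster shifts
```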
Convex-hull Transfer
Convex-hull transfer treats each image’s palette as a convex set in color space. It transfers colors by preserving a pixel’s barycentric coordinates with respect to its source palette while mapping those coordinates onto the target palette.
Palette alignment: Given palettes $P^A = \{v_1, \dots, v_K\}$ and $P^B = \{v'_1, \dots, v'_K\}$ (in RGB or Lab), we first compute a one-to-one correspondence with a Hungarian assignment that minimizes the total Lab distance: $\pi = \arg\min_{\pi} \sum_k \| v_k - v'_{\pi(k)} \|.$ This permutes $P^B$ to best match $P^A$ perceptually.
Inside-hull mapping (barycentric coordinates): Let $x$ be a pixel color (as a 3-vector) in image A. If $x$ lies in the convex hull of $P^A$, we find a containing simplex (e.g., a Delaunay triangle or tetrahedron in RGB) with vertices $v_{i_1}, \dots, v_{i_4}$. We compute barycentric weights $\alpha_j \ge 0$ with $\sum_j \alpha_j = 1$ such that $x = \sum_j \alpha_j v_{i_j}$. We then map $x$ to the target palette by reusing the same weights on the matched vertices: $x' = \sum_j \alpha_j v'_{\pi(i_j)}.$
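For the inside-hull case, the barycentric weights of a color with respect to a tetrahedron reduce to a 4×4 linear solve; a minimal sketch (helper names are illustrative):

```python
import numpy as np

def barycentric_weights(x, verts):
    """Barycentric coordinates of 3-vector x w.r.t. 4 simplex vertices:
    solve verts^T w = x with an extra row enforcing sum(w) = 1."""
    A = np.vstack([np.asarray(verts, float).T, np.ones(len(verts))])  # (4, 4)
    b = np.append(np.asarray(x, float), 1.0)
    return np.linalg.solve(A, b)

def map_to_target(x, src_verts, tgt_verts):
    w = barycentric_weights(x, src_verts)       # weights in the source simplex
    return np.asarray(tgt_verts, float).T @ w   # same weights, matched vertices
```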
Outside-hull fallback (local simplex projection): If $x$ lies outside the hull, we select the nearest palette colors (e.g., 3–5 neighbors) and solve a constrained least squares to obtain a convex combination that best approximates $x$: $\min_{\alpha} \| x - \sum_j \alpha_j v_{i_j} \|^2$ subject to $\alpha_j \ge 0$, $\sum_j \alpha_j = 1$. The solution (via simplex projection) yields weights that we again transfer to the matched targets: $x' = \sum_j \alpha_j v'_{\pi(i_j)}.$
Reconstruction and safeguards: To preserve shading and highlights, we keep the lightness channel and only replace chroma: the output pixel is $(L_x, a', b')$, where $(a', b')$ is the chroma of $x'$. We apply practical guards for stability:
- Background/line protection: skip updates for near-white/near-black or very low-chroma pixels.
- Chroma clamp: cap the chroma magnitude $\sqrt{a'^2 + b'^2}$ to avoid oversaturation.
- Gentle blend: mix the recolored result with the original (e.g., 80–90%) to suppress artifacts while retaining the target palette’s look.
Neighbor Segments with Superpixel (NS-S)
The Neighbor Segments with Superpixels (NS-S) method extends palette-based color transfer by incorporating spatially coherent superpixel regions. Instead of operating on individual pixels, the image is first partitioned into a set of superpixels, each representing a compact region of adjacent pixels with similar color and texture characteristics. Let the superpixel segmentation produce $S = \{s_1, \dots, s_n\}$, where each superpixel is treated as a unit for color assignment. A neighborhood graph $G = (V, E)$ is constructed, where an edge $(s_i, s_j) \in E$ indicates that two superpixels touch or share a boundary in the image plane. The adjacency matrix is defined as $A_{ij} = 1$ if $(s_i, s_j) \in E$ and $A_{ij} = 0$ otherwise.
Given a target palette $P = \{p_1, \dots, p_K\}$, each superpixel $s_i$ receives a transferred color $c_i$, determined by a palette-matching function $\pi$ (e.g., hard assignment, soft matching, or nearest palette color).
To ensure smooth and visually coherent recoloring across the image, the NS-S method minimizes the following energy: $E = \sum_i \| c_i - p_{\pi(i)} \|^2 + \lambda \sum_{(i,j) \in E} w_{ij} \| c_i - c_j \|^2,$
where:
- the data term $\sum_i \| c_i - p_{\pi(i)} \|^2$ encourages each superpixel to adopt its intended palette color,
- the smoothness term $\sum_{(i,j) \in E} w_{ij} \| c_i - c_j \|^2$ enforces spatial smoothness between adjacent superpixels,
- $w_{ij}$ weights the similarity between neighboring superpixels (e.g., based on color difference, gradient magnitude, or Lab distance),
- $\lambda$ controls the strength of neighborhood smoothing.
By working at the superpixel level, NS-S effectively reduces noise, prevents pixel-level flicker, and maintains clean boundaries between meaningful regions. The neighborhood smoothing further prevents abrupt color jumps, producing recolored images that maintain Pokémon shapes, shading, and visual consistency while still allowing strong palette-driven stylistic changes.
Overall, the NS-S formulation provides a computationally efficient and perceptually stable approach for palette-based color transfer, balancing stylization with structural fidelity.
Results
Evaluation Metrics
We quantitatively evaluate our color-transfer methods by comparing the original Pokémon image and the recolored result using several complementary metrics.
Fréchet Inception Distance (FID)
FID measures the distance between the feature distributions of two image sets: Inception-network activations are summarized by their mean and covariance, and the Fréchet distance between the resulting Gaussians is reported. Lower values indicate that the recolored images are statistically closer to the originals.
Color Histogram Similarity (Correlation / Chi-Square)
We compare the color histograms of the original and recolored images using two standard measures: histogram correlation (higher is more similar, with 1 meaning identical histograms) and the chi-square statistic (lower is more similar, with 0 meaning identical histograms).
D-CIELAB (ΔEab for Normal and Color-Deficient Observers)
D-CIELAB measures pixel-wise color differences using CIELAB (Lab) color space and extends them to different types of observers, including those with simulated color-vision deficiencies. For each pair of corresponding pixels in the original and recolored images, we compute the Euclidean distance ΔEab in Lab space, then average over all pixels.
We report four average ΔE values:
- ΔEab (trichromat) – Mean per-pixel color difference for a normal color-vision observer.
- ΔEab (protan) – Mean difference as seen by a protan (L-cone-deficient) observer.
- ΔEab (deutan) – Mean difference as seen by a deutan (M-cone-deficient) observer.
- ΔEab (tritan) – Mean difference as seen by a tritan (S-cone-deficient) observer.
In all cases, lower values indicate more similar colors. Differences around 1 in ΔEab are often near the threshold of human just-noticeable color differences, while larger numbers indicate more visible shifts. This metric allows us to assess how well the color transfer preserves appearance for both normal and color-blind viewers.
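The trichromat variant reduces to a mean Euclidean distance in Lab; a minimal sketch (the color-deficient variants apply the same distance after passing both images through a color-vision-deficiency simulation):

```python
import numpy as np

def mean_delta_e_ab(lab1, lab2):
    """Mean per-pixel Euclidean distance in Lab space (Delta E*ab).
    Inputs are (H, W, 3) arrays already converted to Lab."""
    diff = np.asarray(lab1, float) - np.asarray(lab2, float)
    return float(np.mean(np.linalg.norm(diff, axis=-1)))
```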
CIE94 Color Difference
CIE94 is a perceptual color-difference formula that improves on raw ΔEab by weighting lightness, chroma, and hue differences according to human sensitivity. We convert both images to Lab space and, for each pixel, decompose the color difference into:
- ΔL – difference in lightness,
- ΔC – difference in chroma (saturation),
- ΔH – difference in hue,
and then combine these components with empirically derived weights to obtain a single ΔE94 value per pixel. We average ΔE94 across all pixels to get the final score.
- 0 indicates identical colors.
- Lower mean ΔE94 values correspond to more perceptually similar recolorings.
Compared to plain ΔEab, CIE94 is more aligned with human judgments in many regions of color space.
CIEDE2000 Color Difference
CIEDE2000 (ΔE00) is a more recent and widely adopted color-difference formula that further refines perceptual uniformity compared to CIE94. It introduces additional corrections for:
- Nonlinearities in chroma and hue perception,
- Interactions between chroma and hue differences,
- Local variations in perceptual sensitivity across Lab space.
As with CIE94, we compute ΔE00 for each pixel between the original and recolored images and average over all pixels.
- 0 indicates identical colors.
- Lower mean ΔE00 values indicate recolorings that are closer to the original in terms of human-perceived color.
CIEDE2000 was calibrated against large psychophysical datasets and corrects known non-uniformities of CIELAB and CIE94.
Structural Similarity Index (SSIM)
Structural Similarity Index (SSIM) measures perceptual image quality by comparing local patterns of luminance, contrast, and structure rather than raw pixel differences. For each color channel, we compute SSIM over local windows using Gaussian-weighted means and variances, then average across channels and spatial locations.
- The SSIM score ranges from −1 to 1 and typically falls between 0 and 1 for natural images.
- A value of 1 means the images are structurally identical.
- Higher values indicate better preservation of local structure, contrast, and luminance.
In our context, SSIM tells us whether the color transfer preserves the underlying Pokémon shapes, edges, and textures, even when colors change.
VGG Feature-Space Distance
The VGG feature-space distance measures how different two images are in the high-level feature space of a pretrained VGG16 network. We feed both the original and recolored images into VGG16 and extract intermediate convolutional feature maps. Each spatial location in a feature map is treated as a point in a high-dimensional feature space.
We then compute the symmetric Hausdorff distance between the two sets of feature vectors:
- For each feature point in the original image, find the closest feature point in the recolored image, and record the worst (largest) such distance.
- Repeat in the opposite direction (recolored to original).
- Take the maximum of these two worst-case distances as the final metric.
Lower VGG distances mean that, at every location, the network can find a similar feature in the other image, indicating similar high-level structure and texture. Higher values indicate more substantial semantic or structural changes beyond simple color shifts.
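The three steps above amount to the symmetric Hausdorff distance between two point sets; a minimal sketch (brute-force pairwise distances, fine for the moderate feature counts produced by a single VGG layer):

```python
import numpy as np

def symmetric_hausdorff(A, B):
    """Symmetric Hausdorff distance between feature sets A (N, d), B (M, d):
    the worst-case nearest-neighbor distance, taken in both directions."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    forward = d.min(axis=1).max()    # worst match from A into B
    backward = d.min(axis=0).max()   # worst match from B into A
    return float(max(forward, backward))
```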
Quantitative and Qualitative Results
| Metric | Palette Dif | Clustering | Clustering-NP | Convex Hull | NS | NS-S |
|---|---|---|---|---|---|---|
| FID | 689.24/564.47 | 106.22/110.64 | 54.29/61.61 | 96.12/66.19 | 107.60/108.85 | 75.15/42.77 |
| Histogram-Corr | 0.91/0.96 | 0.91/0.96 | 0.00/0.96 | 0.87/0.96 | 0.91/0.96 | 0.90/0.96 |
| Histogram-Chi2 | 2.76/1.78 | 8.97/2.35 | 930.03/2.62 | 23.28/2.02 | 8.35/2.39 | 6.38/9.97 |
| CIELAB | 19.86/16.00 | 18.32/14.88 | 14.22/11.20 | 13.03/8.76 | 18.33/14.63 | 16.88/15.51 |
| CIE94 | 15.19/12.44 | 13.44/11.13 | 9.72/8.52 | 8.78/6.71 | 13.45/10.98 | 12.20/11.76 |
| CIEDE2000 | 14.33/12.07 | 13.10/10.97 | 10.34/9.17 | 8.77/7.14 | 13.11/10.81 | 11.83/11.20 |
| SSIM | 0.81/0.87 | 0.96/0.95 | 0.99/0.99 | 0.98/0.99 | 0.96/0.96 | 0.97/0.95 |
| VGG Latent Space Distance | 36.80/36.46 | 15.46/21.98 | 16.40/22.27 | 13.73/18.18 | 15.76/22.05 | 17.29/18.31 |

| Metric | Palette Dif | Clustering | Clustering-NP | Convex Hull | NS | NS-S |
|---|---|---|---|---|---|---|
| FID | 604.47/428.24 | 506.28/846.57 | 191.41/652.94 | 22.94/305.28 | 476.33/421.44 | 198.56/327.57 |
| Histogram-Corr | 0.98/0.98 | 0.98/0.98 | -0.00/0.98 | 0.99/0.98 | 0.98/0.98 | 0.98/0.99 |
| Histogram-Chi2 | 0.72/53.33 | 0.73/52.58 | 4.69/108.98 | 18.49/40.69 | 0.74/51.02 | 1.00/38.11 |
| CIELAB | 23.98/17.02 | 16.46/16.59 | 13.57/18.01 | 1.73/13.16 | 18.37/16.04 | 7.03/21.23 |
| CIE94 | 17.48/11.64 | 11.77/10.97 | 11.77/10.98 | 1.13/5.69 | 13.10/10.52 | 5.21/14.10 |
| CIEDE2000 | 12.09/10.85 | 10.30/9.41 | 10.59/9.37 | 1.23/5.44 | 11.25/10.18 | 4.85/12.43 |
| SSIM | 0.73/0.85 | 0.86/0.93 | 0.98/0.94 | 0.99/0.96 | 0.80/0.87 | 0.95/0.88 |
| VGG Latent Space Distance | 38.70/42.80 | 34.23/37.29 | 23.12/29.51 | 12.53/29.97 | 37.86/37.41 | 25.25/25.67 |

| Metric | Palette Dif | Clustering | Clustering-NP | Convex Hull | NS | NS-S |
|---|---|---|---|---|---|---|
| FID | 766.17/717.07 | 205.08/215.13 | 217.28/236.68 | 445.31/146.82 | 150.13/147.53 | 69.81/52.21 |
| Histogram-Corr | 0.95/0.98 | 0.95/0.98 | -0.00/0.00 | 0.95/0.98 | 0.95/0.98 | 0.96/0.99 |
| Histogram-Chi2 | 0.82/1.20 | 1160.72/2.03 | 1.85/1118.66 | 143.71/3.60 | 1.13/1.10 | 1.02/18.90 |
| CIELAB | 32.61/27.79 | 9.69/20.75 | 39.02/23.37 | 7.42/16.33 | 31.56/23.39 | 13.82/12.89 |
| CIE94 | 19.73/18.92 | 7.00/12.61 | 31.80/14.86 | 6.14/7.95 | 18.42/14.38 | 9.70/7.60 |
| CIEDE2000 | 16.95/16.65 | 6.43/11.03 | 24.08/13.29 | 5.95/7.06 | 15.87/12.65 | 9.47/8.14 |
| SSIM | 0.79/0.71 | 0.95/0.91 | 0.95/0.92 | 0.98/0.93 | 0.87/0.88 | 0.98/0.96 |
| VGG Latent Space Distance | 50.05/43.90 | 23.22/27.97 | 12.17/25.04 | 15.94/23.81 | 30.56/30.23 | 16.39/21.61 |

Conclusions
Appendix I
Appendix II
Wenxiao Cai:
Yifei Deng: