Apache Groovy


Fruity Eclipse Collections

by paulk

Posted on Thursday October 13, 2022 at 11:05AM in Technology

This blog post continues, to some degree, from the previous post, but instead of deep learning, we'll look at clustering using k-means, after first exploring some of the top methods of Eclipse Collections with fruit emoji examples.

Eclipse Collections Fruit Salad

First, we'll define a Fruit enum (it adds one additional fruit compared to the related Eclipse Collections kata):

code for fruit enum
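The enum's code didn't survive extraction, so here is a hedged reconstruction: the constants, emoji and colors below are illustrative only (the real definitions live in the kata repo). Later snippets reference Fruit.ALL, fruit.emoji and fruit.color, so those members are included:

```groovy
import java.awt.Color

// Hypothetical sketch of the Fruit enum; actual constants and colors may differ
enum Fruit {
    APPLE('🍎', Color.RED),
    BANANA('🍌', Color.YELLOW),
    CHERRY('🍒', Color.RED),
    ORANGE('🍊', Color.ORANGE),
    PEACH('🍑', Color.ORANGE)

    // convenience list used by later snippets
    public static final List<Fruit> ALL = values().toList()

    final String emoji
    final Color color

    Fruit(String emoji, Color color) {
        this.emoji = emoji
        this.color = color
    }
}

assert Fruit.ALL.size() == 5
assert Fruit.PEACH.color == Color.ORANGE
```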

We can use this enum in the following examples:


The last example calculates red fruit in parallel threads. As coded, it uses virtual threads when run on JDK 19 with preview features enabled. You can follow the suggestion in the comment to run on other JDK versions or with normal threads. In addition to Eclipse Collections, we have the GPars library on our classpath. Here we are only using one method, which manages the pool lifecycle for us.
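The examples themselves were lost in extraction. As a stand-in for the red-fruit example, here is a minimal self-contained sketch that does the same parallel filter using JDK parallel streams rather than GPars (the fruit names and colors are illustrative, not the kata's):

```groovy
import java.util.stream.Collectors

// Hypothetical stand-in data: fruit name -> nominal color
var fruitColor = [APPLE: 'red', BANANA: 'yellow', CHERRY: 'red', ORANGE: 'orange']

// filter for red fruit across parallel threads (JDK streams instead of GPars)
var redFruit = fruitColor.entrySet().parallelStream()
        .filter { it.value == 'red' }
        .map { it.key }
        .collect(Collectors.toList())

assert redFruit.toSorted() == ['APPLE', 'CHERRY']
```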

Exploring emoji colors

For some fun, let's look at whether the nominated color of each fruit matches the color of the related emoji. As in the previous blog, we'll use the slightly nicer Noto Color Emoji fonts for our fruit as shown here:

[Image: Noto Color Emoji fruit glyphs from Google Fonts]

We'll use an Eclipse Collections BiMap to switch back and forth between the color names and java.awt colors:

@Field public static COLOR_OF = BiMaps.immutable.ofAll([
    /* color name -> java.awt.Color entries elided */
])
@Field public static NAME_OF = COLOR_OF.inverse()

We are also going to use some helper functions to switch between RGB and HSB color values:

import static java.awt.Color.RGBtoHSB

static hsb(int r, int g, int b) {
    float[] hsb = new float[3]
    RGBtoHSB(r, g, b, hsb)
    hsb
}

static rgb(BufferedImage image, int x, int y) {
    int rgb = image.getRGB(x, y)
    int r = (rgb >> 16) & 0xFF
    int g = (rgb >> 8) & 0xFF
    int b = rgb & 0xFF
    [r, g, b]
}
The HSB color space represents colors in a spectrum from 0 to 360 degrees:

[Image: HSB color circle]
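As a quick sanity check of that degree scale, the JDK's own Color.RGBtoHSB (which the helpers above delegate to) returns hue as a fraction of one full turn, so multiplying by 360 gives degrees:

```groovy
import java.awt.Color

// hue component is hsb[0], expressed as a 0..1 fraction of 360°
float[] red   = Color.RGBtoHSB(255, 0, 0, null)
float[] green = Color.RGBtoHSB(0, 255, 0, null)
float[] blue  = Color.RGBtoHSB(0, 0, 255, null)

assert (red[0]   * 360).round() == 0    // red sits at 0°
assert (green[0] * 360).round() == 120  // green at 120°
assert (blue[0]  * 360).round() == 240  // blue at 240°
```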

We have two helper methods to assist with colors. The first picks out "mostly black" and "mostly white" colors while the second uses a switch expression to carve out some regions of the color space for our colors of interest:

static range(float[] hsb) {
    if (hsb[1] < 0.1 && hsb[2] > 0.9) return [0, WHITE]
    if (hsb[2] < 0.1) return [0, BLACK]
    int deg = (hsb[0] * 360).round()
    return [deg, range(deg)]
}

static range(int deg) {
    switch (deg) {
        case 0..<16 -> RED
        case 16..<35 -> ORANGE
        case 35..<75 -> YELLOW
        case 75..<160 -> GREEN
        case 160..<250 -> BLUE
        case 250..<330 -> MAGENTA
        default -> RED
    }
}
Note that the JDK doesn't have a standard color of PURPLE, so we combine purple with magenta by choosing an appropriate broad spectrum for MAGENTA.

We used a Plotly 3D interactive scatterplot (as supported by the Tablesaw Java dataframe and visualization library) to visualize our emoji colors (as degrees on the color spectrum) vs the XY coordinates:

[Image: 3D scatterplot of color (hue degrees) vs x,y coordinates]

We are going to try out 3 approaches for determining the predominant color of each emoji:

  1. Most common color: We find the color spectrum value for each point and count up the number of points of each color. The color with the most points will be selected. This is simple and works in many scenarios but if an apple or cherry has 100 shades of red but only one shade of green for the stalk or a leaf, green may be selected.
  2. Most common range: We group each point into a color range. The range with the most points will be selected.
  3. Centroid of biggest cluster: We divide our emoji image into a grid of sub-images. We will perform k-means clustering of the RGB values for each point in the sub-image. This groups similarly colored points together into clusters. The cluster with the most points will be selected and its centroid will be chosen as the predominant color. This approach has the effect of pixelating our sub-image by color. This approach is inspired by this Python article.
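A toy illustration of the trade-off between the first two approaches, using made-up hue values (in degrees) where several distinct red shades compete with one repeated green shade:

```groovy
// hypothetical hue data: six distinct red shades, one green shade repeated five times
var hues = [0, 1, 2, 3, 4, 5, 110, 110, 110, 110, 110]

// simplified range function for the sketch (RED below 16°, GREEN from 75° to 160°)
def range = { int deg -> deg < 16 ? 'RED' : (deg in 75..<160 ? 'GREEN' : 'OTHER') }

// approach 1: most common exact color — the single repeated green shade wins
assert hues.countBy { it }.max { it.value }.key == 110

// approach 2: most common range — the red shades collectively win
assert hues.countBy(range).max { it.value }.key == 'RED'
```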

Most Common Color

Ignoring the background white color, the most common color for our PEACH emoji is a shade of orange. The graph below shows the count of each color:

[Image: color histogram for PEACH]

Most Common Range

If instead of counting each color, we group colors into their range and count the numbers in each range, we get the following graph for PEACH:

[Image: range histogram for PEACH]


K-Means

K-Means is an algorithm for finding cluster centroids. For k=3, we would start by picking 3 random points as our starting centroids.


We allocate all points to their closest centroid:


Given this allocation, we re-calculate each centroid from all of its points:


We repeat this process until either a stable centroid selection is found, or we have reached a certain number of iterations.
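The steps above can be sketched in a few lines of plain Groovy. This is a toy implementation over RGB triples with deterministic seeding (the first k points become the starting centroids), not the library version the blog actually uses:

```groovy
// Toy k-means: returns [centroids, clusters] where clusters maps centroid -> points
List kmeans(List<List<Number>> points, int k, int maxIter = 100) {
    // 1. pick starting centroids (first k points, converted to doubles)
    def centroids = points.take(k).collect { p -> p.collect { it as double } }
    def clusters = null
    for (iter in 0..<maxIter) {
        // 2. allocate every point to its closest centroid (squared distance)
        clusters = points.groupBy { p ->
            centroids.min { c -> (0..<p.size()).sum { i -> (p[i] - c[i]) ** 2 } }
        }
        // 3. re-calculate each centroid as the mean of its allocated points
        def next = centroids.collect { c ->
            def pts = clusters[c] ?: [c]
            (0..<c.size()).collect { i -> (pts.collect { it[i] }.sum() / pts.size()) as double }
        }
        if (next == centroids) break   // 4. stop once the selection is stable
        centroids = next
    }
    [centroids, clusters]
}

// two dark points and two light points separate into two clusters
def (centroids, clusters) = kmeans([[0, 0, 0], [255, 255, 255], [10, 0, 0], [245, 255, 250]], 2)
assert centroids[0] == [5.0d, 0.0d, 0.0d]   // centroid of the dark cluster
assert clusters[centroids[1]].size() == 2   // two points in the light cluster
```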

We used the K-Means algorithm from Apache Commons Math.

Here is the kind of result we would expect if run on the complete set of points for the PEACH emoji. The black dots are the centroids. It has found one green, one orange and one red centroid. The centroid with the most points allocated to it should be the most predominant color. (This is another interactive 3D scatterplot.)


We can plot the number of points allocated to each cluster as a bar chart. (We used a Scala plotting library to show Groovy integration with Scala.)

[Image: centroid sizes bar chart for PEACH]

The code for drawing the above chart looks like this:

var trace = new Bar(intSeq([1, 2, 3]), intSeq(sizes))
.withMarker(new Marker().withColor(oneOrSeq(colors)))

var traces = asScala([trace]).toSeq()

var layout = new Layout()
.withTitle("Centroid sizes for $fruit")

Plotly.plot(path, traces, layout, defaultConfig, false, false, true)

K-Means with subimages

The approach we will take for our third option enhances K-Means. Instead of finding centroids for the whole image as the graphs just shown do, we divide the image into subimages and perform K-Means on each subimage. Our overall predominant color is determined to be the most common color predicted across all of our subimages.

Putting it all together

Here is the final code covering all three approaches (including printing some pretty images highlighting the third approach and the Plotly 3D scatter plots):

var results = Fruit.ALL.collect { fruit ->
    var file = getClass().classLoader.getResource("${}.png").file as File
    var image =   // (image loading expression elided in source)

    var colors = [:].withDefault { 0 }
    var ranges = [:].withDefault { 0 }
    for (x in 0..<image.width) {
        for (y in 0..<image.height) {
            def (int r, int g, int b) = rgb(image, x, y)
            float[] hsb = hsb(r, g, b)
            def (deg, range) = range(hsb)
            if (range != WHITE) { // ignore white background
                colors[deg]++
                ranges[range]++
            }
        }
    }
    var maxRange = ranges.max { e -> e.value }.key
    var maxColor = range(colors.max { e -> e.value }.key)

    int cols = 8, rows = 8
    int grid = 5 // thickness of black "grid" between subimages
    int stepX = image.width / cols
    int stepY = image.height / rows
    var splitImage = new BufferedImage(image.width + (cols - 1) * grid, image.height + (rows - 1) * grid, image.type)
    var g2a = splitImage.createGraphics()
    var pixelated = new BufferedImage(image.width + (cols - 1) * grid, image.height + (rows - 1) * grid, image.type)
    var g2b = pixelated.createGraphics()

    ranges = [:].withDefault { 0 }
    for (i in 0..<rows) {
        for (j in 0..<cols) {
            def clusterer = new KMeansPlusPlusClusterer(5, 100)
            List<DoublePoint> data = []
            for (x in 0..<stepX) {
                for (y in 0..<stepY) {
                    def (int r, int g, int b) = rgb(image, stepX * j + x, stepY * i + y)
                    var dp = new DoublePoint([r, g, b] as int[])
                    var hsb = hsb(r, g, b)
                    def (deg, col) = range(hsb)
                    data << dp
                }
            }
            var centroids = clusterer.cluster(data)
            var biggestCluster = centroids.max { ctrd -> ctrd.points.size() }
            var ctr =*.intValue()   // (receiver expression elided in source)
            var hsb = hsb(*ctr)
            def (_, range) = range(hsb)
            if (range != WHITE) ranges[range]++
            g2a.drawImage(image, (stepX + grid) * j, (stepY + grid) * i, stepX * (j + 1) + grid * j, stepY * (i + 1) + grid * i,
                stepX * j, stepY * i, stepX * (j + 1), stepY * (i + 1), null)
            g2b.color = new Color(*ctr)
            g2b.fillRect((stepX + grid) * j, (stepY + grid) * i, stepX, stepY)
        }
    }

    var swing = new SwingBuilder()
    var maxCentroid = ranges.max { e -> e.value }.key
    swing.edt {
        frame(title: 'Original vs Subimages vs K-Means',
              defaultCloseOperation: DISPOSE_ON_CLOSE, pack: true, show: true) {
            label(icon: imageIcon(image))
            label(icon: imageIcon(splitImage))
            label(icon: imageIcon(pixelated))
        }
    }

    [fruit, maxRange, maxColor, maxCentroid]
}

println "Fruit  Expected  By max color  By max range  By k-means"
results.each { fruit, maxRange, maxColor, maxCentroid ->
    def colors = [fruit.color, maxColor, maxRange, maxCentroid].collect { /* name lookup elided */ }
    println "${fruit.emoji.padRight(6)} $colors"
}

Here are the resulting images:

[Images: original vs subimages vs k-means pixelated views for each fruit emoji]

And, here are the final results:

[Image: final results table]

In our case, all three approaches yielded the same results. Results for other emojis may vary.

Further information

Deep Learning and Eclipse Collections

by paulk

Posted on Tuesday October 11, 2022 at 10:41AM in Technology

DeepLearning4J and Eclipse Collections revisited

In previous blogs, we have covered Eclipse Collections and Deep Learning. Recently, a couple of the highly recommended katas for Eclipse Collections have been revamped to include "pet" and "fruit" emojis for a little bit of extra fun. What could be better than Learning Eclipse Collections? Deep Learning and Eclipse Collections of course!

First, we create a PetType enum with the emoji toString, and then Pet and Person records. We'll populate a people list as is done in the kata. The full details are in the repo.
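The definitions themselves aren't shown here, so the sketch below is a hedged reconstruction (names follow the kata, but the constants and emoji are illustrative, and the repo's details may differ). It needs Groovy 4+ for the record keyword:

```groovy
// PetType enum with the emoji toString
enum PetType {
    CAT('🐱'), DOG('🐶'), HAMSTER('🐹'), TURTLE('🐢'), BIRD('🐦'), SNAKE('🐍')

    final String emoji
    PetType(String emoji) { this.emoji = emoji }
    String toString() { emoji }
}

// Pet and Person records (fullName is an assumed convenience accessor)
record Pet(String name, PetType type) { }

record Person(String firstName, String lastName, List<Pet> pets) {
    String getFullName() { "$firstName $lastName" }
}

var people = [
    new Person('Bob', 'Smith', [new Pet('Dolly', PetType.CAT), new Pet('Spot', PetType.DOG)])
]
assert people[0].fullName == 'Bob Smith'
assert people[0].pets()*.type()*.toString() == ['🐱', '🐶']
```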

Let's use a GQuery expression to explore the pre-populated list:

println GQ {
    from p in people
    select p.fullName, p.pets
}

The result is:

[Image: GQuery result for the pre-populated people list]

Now let's duplicate the assertion from the getCountsByPetType test in exercise3 which checks pet counts:

[Image: getCountsByPetType assertion in the Groovy web console]

As we expect, it passes.
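The kata's assertion works on an Eclipse Collections Bag; a plain-Groovy equivalent of the same pet-count check can be sketched with countBy (the emoji tallies below mirror the pets listed later in this post, but the exact kata data is an assumption):

```groovy
// one emoji per pet: 2 cats, 2 dogs, 1 snake, 1 bird, 1 turtle, 2 hamsters
var petEmojis = ['🐱', '🐱', '🐶', '🐶', '🐍', '🐦', '🐢', '🐹', '🐹']

// countBy builds a map of value -> occurrence count, much like an EC Bag
var countsByType = petEmojis.countBy { it }

assert countsByType['🐱'] == 2
assert countsByType['🐶'] == 2
assert countsByType['🐍'] == 1
```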

Now, for a bit of fun, we will use a neural network trained to detect cat and dog images and apply it to our emojis. We'll follow the process described here. It uses DeepLearning4J to train and then use a model. The images used to train the model were real cat and dog images, not emojis, so we aren't expecting our model to be super accurate.

The first attempt was to write the emojis into Swing JLabel components and then save using a buffered image. This led to poor-looking images:


And consequently, poor image inference. Recent JDK versions on some platforms might do better but we gave up on this approach.

Instead, emoji image files from the Noto Color Emoji font were used and saved under the pet type in the resources folder. These look much nicer:

2022-10-11 18_24_38-Noto Color Emoji - Google Fonts.png

Here is the code which makes use of those saved images to detect the animal types (note the use of type aliasing since we have two PetType classes; we rename one to PT):

import as PT   // (fully-qualified class name elided in source)

var vg16ForCat = new VG16ForCat().tap { loadModel() }
var results = []
people.each { p ->
    results << p.pets.collect { pet ->
        var file = new File("resources/${}.png")   // (property name elided in source)
        PT petType = vg16ForCat.detectCat(file, 0.675d)
        var desc = switch (petType) {
            case PT.CAT -> 'is a cat'
            case PT.DOG -> 'is a dog'
            default -> 'is unknown'
        }
        "$ $desc"   // (pet name property elided in source)
    }
}
println results.flatten().join('\n')

Note that the model exceeds the maximum allowable size for normal GitHub repos, so you should create it following the original repo instructions and then store the resulting model in the resources folder.

When we run the script, we get the following output:

[main] INFO org.nd4j.linalg.factory.Nd4jBackend - Loaded [CpuBackend] backend
[main] INFO org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner - Blas vendor: [OPENBLAS]
VertexName (VertexType)                 nIn,nOut       TotalParams    ParamsShape                    Vertex Inputs
input_1 (InputVertex)                   -,-            -              -                              -
block1_conv1 (Frozen ConvolutionLayer)  3,64           1792           b:{1,64}, W:{64,3,3,3}         [input_1]
block1_conv2 (Frozen ConvolutionLayer)  64,64          36928          b:{1,64}, W:{64,64,3,3}        [block1_conv1]
block1_pool (Frozen SubsamplingLayer)   -,-            0              -                              [block1_conv2]
block2_conv1 (Frozen ConvolutionLayer)  64,128         73856          b:{1,128}, W:{128,64,3,3}      [block1_pool]
block2_conv2 (Frozen ConvolutionLayer)  128,128        147584         b:{1,128}, W:{128,128,3,3}     [block2_conv1]
block2_pool (Frozen SubsamplingLayer)   -,-            0              -                              [block2_conv2]
block3_conv1 (Frozen ConvolutionLayer)  128,256        295168         b:{1,256}, W:{256,128,3,3}     [block2_pool]
block3_conv2 (Frozen ConvolutionLayer)  256,256        590080         b:{1,256}, W:{256,256,3,3}     [block3_conv1]
block3_conv3 (Frozen ConvolutionLayer)  256,256        590080         b:{1,256}, W:{256,256,3,3}     [block3_conv2]
block3_pool (Frozen SubsamplingLayer)   -,-            0              -                              [block3_conv3]
block4_conv1 (Frozen ConvolutionLayer)  256,512        1180160        b:{1,512}, W:{512,256,3,3}     [block3_pool]
block4_conv2 (Frozen ConvolutionLayer)  512,512        2359808        b:{1,512}, W:{512,512,3,3}     [block4_conv1]
block4_conv3 (Frozen ConvolutionLayer)  512,512        2359808        b:{1,512}, W:{512,512,3,3}     [block4_conv2]
block4_pool (Frozen SubsamplingLayer)   -,-            0              -                              [block4_conv3]
block5_conv1 (Frozen ConvolutionLayer)  512,512        2359808        b:{1,512}, W:{512,512,3,3}     [block4_pool]
block5_conv2 (Frozen ConvolutionLayer)  512,512        2359808        b:{1,512}, W:{512,512,3,3}     [block5_conv1]
block5_conv3 (Frozen ConvolutionLayer)  512,512        2359808        b:{1,512}, W:{512,512,3,3}     [block5_conv2]
block5_pool (Frozen SubsamplingLayer)   -,-            0              -                              [block5_conv3]
flatten (PreprocessorVertex)            -,-            -              -                              [block5_pool]
fc1 (Frozen DenseLayer)                 25088,4096     102764544      b:{1,4096}, W:{25088,4096}     [flatten]
fc2 (Frozen DenseLayer)                 4096,4096      16781312       b:{1,4096}, W:{4096,4096}      [fc1]
predictions (OutputLayer)               4096,2         8194           b:{1,2}, W:{4096,2}            [fc2]
            Total Parameters:  134268738
        Trainable Parameters:  8194
           Frozen Parameters:  134260544
Tabby is a cat
Dolly is a cat
Spot is a dog
Spike is a dog
Serpy is a cat
Tweety is unknown
Speedy is a dog
Fuzzy is unknown
Wuzzy is unknown

As we can see, it correctly predicted the cats (Tabby and Dolly) and dogs (Spot and Spike) but incorrectly thought a snake (Serpy) was a cat and a turtle (Speedy) was a dog. Given the lack of detail in the emoji images compared to the training images, this lack of accuracy isn't unexpected. We could certainly use better images or train our model differently if we wanted better results, but it is fun to see our model not doing too badly, even with emojis!