Aksara Jawa, or the Javanese script, is the core writing system of the Javanese language and has influenced the scripts of various other regional languages such as Sundanese and Madurese. The script is now rarely used on a daily basis, but it is sometimes taught in local schools in certain provinces of Indonesia.
Specific Form of Aksara
The Javanese script we will be classifying is specifically Aksara Wyanjana's Nglegena, i.e. its basic characters. The list consists of 20 basic characters, without their respective Pasangan characters.
Since I have not been able to find a handwritten Javanese script dataset on the internet, I decided to contact one of my high school English teachers, who once showed my class her ability to write the Javanese script. The characters were written on paper, scanned, and edited manually. Credits to Ms. Martha Indrati for the help!
This project is very much inspired by datasets like MNIST and QMNIST, which contain handwritten digits and are go-to datasets for learning image classification. The end goal of this project is to create a deep learning model that can classify handwritten Javanese script to a reasonable degree of accuracy.
The main framework to be used is fastai-v2, which sits on top of PyTorch. Fastai-v2 is still under development as of this writing, but it is ready to be used for basic image classification tasks.
from fastai2.vision.all import *
import torch
The data has been grouped into one folder per class, which we'll load up and then split into training (70%) and validation (30%) images.
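In this setup, fastai's GrandparentSplitter actually reads the split from the folder layout; purely as an illustration of how a 70/30 holdout could be prepared beforehand, here's a minimal sketch (the file paths and helper name are hypothetical):

```python
import random

def split_files(files, valid_pct=0.3, seed=42):
    # Shuffle deterministically, then hold out the tail for validation.
    files = list(files)
    random.Random(seed).shuffle(files)
    cut = int(len(files) * (1 - valid_pct))
    return files[:cut], files[cut:]

files = [f"ha/img_{i}.png" for i in range(10)]  # hypothetical per-class paths
train, valid = split_files(files)
```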
path = Path("handwritten-javanese-script-dataset")
Notice we’re using a small batch size of 5, mainly because we only have 200 images in total.
Here we'll apply cropping and resizing transformations to our images, since most of the characters do not fully occupy the image. We'll crop/pad to 90px, then resize to 128px.
dblock = DataBlock(
    blocks=(ImageBlock(cls=PILImageBW), CategoryBlock),
    get_items=get_image_files,
    splitter=GrandparentSplitter(valid_name='val'),
    get_y=parent_label,
    item_tfms=[CropPad(90), Resize(128, method=ResizeMethod.Crop)])
dls = dblock.dataloaders(path, bs=5, num_workers=0)
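To illustrate what the CropPad-then-Resize item pipeline does, here's a rough PIL equivalent (the crop_pad helper is my own sketch, not fastai's implementation):

```python
from PIL import Image

def crop_pad(img, size, fill=255):
    # Center-crop or pad to a size x size square; paste() clips any overflow
    # when the image is larger than the canvas, so this handles both cases.
    canvas = Image.new('L', (size, size), fill)
    canvas.paste(img, ((size - img.width) // 2, (size - img.height) // 2))
    return canvas

img = Image.new('L', (100, 60), 0)          # stand-in for a scanned glyph
out = crop_pad(img, 90).resize((128, 128))  # 90px square, then up to 128px
```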
There are only 20 characters in the form of Aksara we'll be classifying.
We'll be using XResNet50 as the model, which is based on the Bag of Tricks paper and is an "extension" of the ResNet50 architecture. We'll pass in our data, specify the metrics we'd like to observe, use LabelSmoothingCrossEntropy as the loss function, and add MixUp as our callback.
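LabelSmoothingCrossEntropy mixes the usual loss on the target class with a uniform penalty over all classes, which discourages overconfident predictions. A minimal PyTorch sketch of the idea (the function name is mine, and this is not fastai's exact implementation):

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(logits, target, eps=0.1):
    # (1 - eps) * NLL of the target class + eps * mean NLL over all classes.
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)
    return ((1 - eps) * nll + eps * uniform).mean()
```

With eps=0 this reduces to ordinary cross-entropy.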
learn = Learner(dls, xresnet50(c_in=1, n_out=dls.c), metrics=accuracy, loss_func=LabelSmoothingCrossEntropy(), cbs=MixUp)
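MixUp, for its part, blends each image (and its label) with a randomly chosen partner from the batch. A minimal sketch of the interpolation step, with the mixing weight lam supplied directly rather than sampled from a Beta distribution as the callback does:

```python
import torch

def mixup_batch(x, y_onehot, lam):
    # Blend each sample (and its one-hot label) with a shuffled partner.
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```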
With all things in place, let’s finally train the model to learn from the given dataset and predict which class the image belongs to.
learn.fit_one_cycle(30, 3e-4, cbs=SaveModelCallback(monitor='accuracy', fname='best_model'), wd=0.4)
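fit_one_cycle warms the learning rate up to its peak and then anneals it back down over training. A rough sketch of such a schedule in plain Python (the cosine segments and the div/pct_start defaults here are assumptions, not fastai's exact internals):

```python
import math

def cos_interp(a, b, t):
    # Cosine interpolation from a (at t=0) to b (at t=1).
    return b + (a - b) * (1 + math.cos(math.pi * t)) / 2

def one_cycle_lr(step, total_steps, lr_max=3e-4,
                 pct_start=0.25, div=25.0, div_final=1e5):
    # Warm up over the first pct_start of training, then anneal to near zero.
    warm = int(total_steps * pct_start)
    if step < warm:
        return cos_interp(lr_max / div, lr_max, step / warm)
    return cos_interp(lr_max, lr_max / div_final,
                      (step - warm) / (total_steps - warm))
```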
Better model found at epoch 0 with accuracy value: 0.05000000074505806.
Better model found at epoch 1 with accuracy value: 0.3333333432674408.
Better model found at epoch 2 with accuracy value: 0.38333332538604736.
Better model found at epoch 5 with accuracy value: 0.5333333611488342.
Better model found at epoch 9 with accuracy value: 0.6499999761581421.
Better model found at epoch 12 with accuracy value: 0.8333333134651184.
Better model found at epoch 19 with accuracy value: 0.8999999761581421.
Better model found at epoch 21 with accuracy value: 0.9333333373069763.
Better model found at epoch 22 with accuracy value: 0.949999988079071.
After training, let’s see how well our model learned. Any incorrect prediction in a random batch will have its label colored red.
Instead of only viewing a batch, let’s analyze the results from the entire validation dataset.
interp = ClassificationInterpretation.from_learner(learn)
This confusion matrix lists all the actual versus predicted labels. The darker the blue on the diagonal line, the better our model is at predicting.
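For reference, a confusion matrix is just a two-dimensional tally of actual versus predicted labels; a minimal sketch (helper name is mine):

```python
def confusion_matrix(actual, predicted, n_classes):
    # m[i][j] counts samples whose actual class is i and predicted class is j,
    # so correct predictions accumulate on the diagonal.
    m = [[0] * n_classes for _ in range(n_classes)]
    for a, p in zip(actual, predicted):
        m[a][p] += 1
    return m
```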
On the other hand, this type of interpretation shows several of the predicted images, what our model thinks it is, and how confident it is with that prediction.
Predicting External Images
To see how well our model's regularization fares, let's feed it some external data and see what it predicts.
from PIL import Image
def open_image_bw_resize(source) -> PILImageBW:
    return PILImageBW(Image.open(source).resize((128, 128)).convert('L'))
The following character is supposed to be ma and was picked randomly from available images on the internet.
test0 = open_image_bw_resize('test-image-0.jpg')
test0.show()
Feed it through the model and see its output.
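Under the hood, the prediction is just the argmax over the softmaxed model outputs. A minimal PyTorch sketch (the function name and vocab are illustrative, not fastai's API):

```python
import torch

def predict_class(logits, vocab):
    # Softmax the raw outputs, then take the most probable class and its score.
    probs = torch.softmax(logits, dim=-1)
    idx = int(probs.argmax())
    return vocab[idx], float(probs[idx])
```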
Luckily, the model was able to predict the character correctly. To challenge the model even more, I tried writing Javanese script characters myself to see what the model predicts. Do note that I do not have any background in writing Javanese script, so pardon my skills.
The following character is supposed to be ca.
test1 = open_image_bw_resize('test-image-1.jpg')
test1.show()
This character is supposed to be wa.
test2 = open_image_bw_resize('test-image-2.jpg')
test2.show()
Well, that's an incorrect guess, which is reasonable: firstly because of my poor handwriting, and secondly because the model was trained on one person's particular style of handwriting - in this case, my teacher's. There could be many other factors behind the incorrect guess, such as overfitting or the small dataset size.
There are several possible improvements that could be made, one of which is to increase the variety and size of the dataset, since the model currently trains on a single person's handwriting. Adding other people's handwriting into the mix would also help the model generalize better.
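Another low-cost way to stretch a small single-writer dataset is light augmentation of the existing scans. A minimal sketch using small rotations with PIL (the helper name and angles are arbitrary choices of mine):

```python
from PIL import Image

def augment(img, angles=(-10, -5, 5, 10)):
    # Produce slightly rotated copies, filling exposed corners with white.
    return [img.rotate(a, fillcolor=255) for a in angles]

base = Image.new('L', (128, 128), 255)  # stand-in for a scanned character
variants = augment(base)
```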
That’s it for this mini project of mine. Thanks for your time and I hope you’ve learned something!