Chloe McAree (McAteer)

OpenHack unpack!

Last week I boarded the train in the scorching heat and travelled to Dublin to view the magnificent new Microsoft office and take part in OpenHack with one of my Kainos colleagues. OpenHack is a machine-learning-themed hackathon consisting of a variety of challenges, with a main focus on computer vision.

Microsoft atrium ft. digital lake

On arrival I was in awe at the massive, colourful and very impressive office. After registration, some light breakfast and meeting the Microsoft team, we got stuck into setting up our environment. We set up a Data Science Virtual Machine on Azure, which included JupyterHub and made it easier for us to collaborate as a team when creating our solutions. I had never used Azure before and, starting out, I was a little overwhelmed by its spaceship-like qualities and seemingly endless list of services! As a team we managed to navigate the documentation and get everything we needed up and running.

Background to the challenges:

Computer vision is all about getting programs to process images and extract meaningful content from them. We focused on the image classification side of computer vision: for example, identifying that there is a cat in a picture, or telling which pictures contain cats and which contain dogs.

Some machine learning solutions take a lot of time, are extremely complex and are expensive to build. This is why our first challenge was to use a pre-built solution that took advantage of transfer learning. Transfer learning essentially lets us take a model that has already been trained on a large dataset and adapt it so that it can classify the things we care about.
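
To make the idea concrete, here is a minimal transfer learning sketch in TensorFlow/Keras (not the Custom Vision service we actually used for this challenge). The MobileNetV2 base, the data/train folder layout and the binary cat/dog labels are all assumptions made for illustration:

```python
# Minimal transfer learning sketch: reuse a network pre-trained on ImageNet
# and only train a small classification head for our own labels.
import tensorflow as tf

# Assumed layout: data/train/<class_name>/*.jpg (hypothetical paths).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)

# Pre-trained feature extractor with its weights frozen.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # e.g. cat vs. dog
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```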

Let the challenges begin!

For the first challenge we used Custom Vision, a Microsoft service that allows you to build image classifiers that take advantage of transfer learning. Custom Vision was great to use and easy to understand. It allows you to upload multiple groups of images and add a tag to each group. You can also carry out a quick test by uploading some images that weren’t part of your training data, to see if the predicted classifications are correct.

The Custom Vision service is great if you’re trying to build a simple proof of concept or prototype. I found it especially useful that we didn’t have to clean our data set to use it (more on cleaning later); we could simply upload the images and let Custom Vision handle the rest.
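
For anyone who prefers to script the same workflow, Custom Vision also has a Python SDK. The snippet below is only a sketch: at OpenHack we worked through the web portal, and the endpoint, key, project name, tag and image path are placeholders.

```python
# Sketch of the Custom Vision training workflow via the Python SDK
# (all credentials and paths below are placeholders).
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient("<endpoint>", credentials)

# Create a project and one tag per group of images.
project = trainer.create_project("cats-vs-dogs")
cat_tag = trainer.create_tag(project.id, "cat")

# Upload a tagged training image, then kick off training.
with open("images/cat_01.jpg", "rb") as image_file:
    trainer.create_images_from_data(project.id, image_file.read(), tag_ids=[cat_tag.id])
iteration = trainer.train_project(project.id)
```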

Testing our custom vision model

After completing the first challenge, it was time to get our hands dirty creating our own model. Before we could dive into this, we needed to clean our data, as the images all had different properties such as size, shape and pixel range. This challenge was all about preprocessing our data into a clean, normalized format, so that all the images would have consistent properties.

This took us quite a while to get sorted and was a little confusing, as some of the changes, such as the different pixel ranges, were hard to see with the naked eye. To check whether our code was actually working, we plotted each image’s pixel values as a histogram.
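
As a rough idea of what this step looked like, here is a sketch of that kind of preprocessing and histogram check in Python; the libraries, target size and file path are illustrative rather than exactly what we used:

```python
# Force every image into a consistent shape, colour mode and pixel range,
# then plot a histogram to confirm the normalization actually happened.
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

def preprocess(path, size=(128, 128)):
    """Load an image, resize it and scale pixel values into [0, 1]."""
    img = Image.open(path).convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

pixels = preprocess("images/example.jpg")  # hypothetical path
plt.hist(pixels.ravel(), bins=50)          # the x-axis range makes the normalization obvious
plt.xlabel("pixel value")
plt.ylabel("count")
plt.show()
```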

Viewing our images as histograms

After getting some experience using the Microsoft cognitive services and getting started with cleaning a large dataset, it was time to take a break for the day and go out to enjoy a BBQ on the Microsoft terrace overlooking the Leopardstown racecourse.

Bon appétit!

Surely there’s an algorithm to do that?

Day 2! Data was cleaned, spirits were high and coffee was flowing. Time to jump into the next challenge and take the complexity to the next level. Our task was to use powerful APIs for machine learning.

We started investigating classical/traditional machine learning and realised that scikit-learn provides implementations of many different algorithms; we just had to figure out which one was best for us. After some research we decided as a team that the most appropriate one for our use case would be Random Forest, which builds a number of decision trees, each considering different possible splits of the data, and combines their votes to make its prediction.
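
A random forest in scikit-learn only takes a few lines. The sketch below uses randomly generated placeholder features in place of our real image data, just to show the shape of the code:

```python
# Random forest sketch with placeholder data standing in for our image features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

# X: one row of features per image, y: class labels (placeholders here).
X = np.random.rand(200, 128 * 128 * 3)
y = np.random.randint(0, 2, size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# An ensemble of decision trees; the forest combines their votes for its prediction.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("precision:", precision_score(y_test, clf.predict(X_test)))
```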

The best thing about using scikit-learn was the great documentation. With our algorithm in place and working, it was giving us just above 80% precision. While we were really pleased with this, we wanted to take it a step further and see if we could push it to over 90%… enter deep learning.

80%? We want more!

Classical/traditional machine learning was able to make predictions for us, but deep learning is better suited to a complex image dataset and would help us increase our accuracy.

This challenge asked us to create a Convolutional Neural Network (CNN). CNNs are a type of deep learning architecture widely used in image classification. As this was a complicated task, our team mentor took to the Microsoft Surface Hub and gave us an explanation of deep learning and neural networks.

For this challenge we used TensorFlow to build our deep learning model. It took us some time to research TensorFlow and figure out how it could help us, but by combining our research with help from our mentor we were able to get it implemented and achieve our goal of over 90% precision.
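
For reference, a small CNN in TensorFlow/Keras looks something like the sketch below; the layer sizes, input shape and number of classes are illustrative rather than the exact architecture we ended up with:

```python
# Small CNN sketch for image classification (architecture details are illustrative).
import tensorflow as tf

num_classes = 4  # placeholder: one output per image class
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # learn local image features
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```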

Working on our solutions as a team.

Conclusion

OpenHack showed me a range of machine learning approaches, from basic cognitive services to advanced deep learning techniques. I was able to solve the different challenges across the three days whilst building on my machine learning knowledge.

Overall, Microsoft’s OpenHack event was amazing. It was a great introduction to machine learning and has motivated me to continue pursuing my interest in this area. Although it was a steep learning curve, I learnt a lot and really enjoyed myself. I hope to take this new knowledge and apply it to future projects, whilst also sharing it with my colleagues at Kainos.

Finally, my three key points to take away would be:

  1. You don’t have to dive straight into deep learning. Depending on your use case, you might be able to solve your problem using a cloud service such as Microsoft’s Custom Vision.

  2. Make sure you take the time to clean your data, as it will save you time in the future and give you more accurate results.

  3. Spend time choosing your algorithm carefully, keeping in mind your use case as well as your data set. Here are some tools I found useful: Microsoft’s algorithm cheat sheet and scikit-learn’s algorithm map.