Source photo: https://www.linkedin.com/posts/illya-kuys-304264134_delaware-delawarebelux-innovation-activity-6901884365278625792-tCgd
Source logo: https://www.collabdays.org/2020-benelux-online/sponsors

On the 22nd and 23rd of February 2022, Delaware organized a project for the students of Applied Computer Science at VIVES University. The main goal was to create a program that could recognize the type of medication and its dose from a picture. In addition, it had to be able to tell the nurse whether the patient is allowed to receive this medicine. Text recognition on photos was thus the main focus of the project.

For this project, VIVES divided our classes (AI, Business, Software, and Networks) into 10 mixed groups, so that just about every group had someone from each discipline. 5 groups worked on the Delaware case, while the other 5 got the Cinionic case, in which they had to predict sudden failures of smart projectors.

Day 1 – Difficult doesn’t have to mean impossible

Introduction

We began the first day with a quick introduction to the project. Delaware explained that hospitals and care centers deploy additional staff to mitigate human errors. When a patient needs medication, the hospital or care center sends 2 nurses to administer it. The extra pair of eyes should prevent patients from getting the wrong medication. The problem with this solution is that the extra staff costs a lot of money.

This is where we came in. We had to find a smart solution that uses text recognition to recognize the type of medication and its dose. The program should then check the patient database to verify that the patient the nurse wants to treat is actually allowed to receive that medication.

Getting started

It certainly wasn’t easy to get started, as we hadn’t learned about text recognition yet. But difficult doesn’t have to mean impossible.

Before thinking about what we wanted our product to look like, we first tried to recognize the text on the boxes of medicine that Delaware had brought with them. Our first attempt used Tesseract, an Optical Character Recognition (OCR) engine available for various operating systems, called from Python. Using this tool, we quickly got some results. The downside was that those results weren’t very accurate.
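We no longer have that first script, but a minimal version of what we tried looks roughly like this (the file name is just an example):

```python
# Minimal first attempt: read the text on a medicine box with Tesseract.
# Requires the Tesseract engine itself, plus: pip install pytesseract pillow
from PIL import Image
import pytesseract

image = Image.open("medicine_box.jpg")  # example photo of a medicine box

# image_to_string runs the OCR engine and returns the recognized text
text = pytesseract.image_to_string(image)
print(text)
```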

After some pasta, we tried to optimize the results of Tesseract. This took up the rest of the day.
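I don’t remember every tweak we tried, but a typical way to squeeze better results out of Tesseract is to clean up the photo before feeding it to the engine. A sketch of that idea with OpenCV (the exact preprocessing we used may have differed):

```python
import cv2
import pytesseract

# Tesseract performs much better on clean, high-contrast input
# than on a raw color photo, so we binarize the image first.
image = cv2.imread("medicine_box.jpg")          # example photo
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # drop the color information
gray = cv2.medianBlur(gray, 3)                  # reduce sensor noise
# Otsu's method picks a global black/white threshold automatically
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# "--psm 6" tells Tesseract to assume a single uniform block of text
text = pytesseract.image_to_string(binary, config="--psm 6")
print(text)
```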

We ended our first day with some delicious pizza.

Fun fact

The first day of our project, 22nd February 2022, was a palindrome day. Those days are very uncommon: the previous century had none at all, while this century has 29 (if you write the date as dd-mm-yyyy). For the next one, we will have to wait another 8 years: 03-02-2030.

A palindrome date is a date whose numeral form reads the same from front to back as from back to front. Depending on the date format, dd-mm-yyyy (e.g. in many European countries), mm-dd-yyyy (e.g. in the United States) or yyyy-mm-dd (e.g. ISO 8601 and Japan), a given day is or is not a palindrome day.

Source: nl.wikipedia.org
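You can check the count of 29 yourself with a few lines of Python:

```python
from datetime import date, timedelta

def is_palindrome_day(d: date) -> bool:
    # Write the date as ddmmyyyy and compare it with its own reverse.
    s = d.strftime("%d%m%Y")
    return s == s[::-1]

day = date(2000, 1, 1)
palindromes = []
while day.year < 2100:
    if is_palindrome_day(day):
        palindromes.append(day)
    day += timedelta(days=1)

print(len(palindromes))  # 29, all of them in February of a year 20xx
```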

Day 2 – It was very close, but we made it

Shaping our product

On the second day, we got some croissants. After our breakfast, we shaped our final product.

Emiel from my group improved our text recognition by replacing Tesseract with Azure’s OCR. This solution was much slower than Tesseract, as Azure’s OCR runs as a cloud API, so every photo requires a network round trip. But the results of Azure’s OCR were a lot better and needed hardly any optimization.
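I can’t show Emiel’s exact code, but a minimal sketch of calling Azure’s Read API from Python looks like this. The endpoint and key are placeholders, and the API version (v3.2) is an assumption on my part, not necessarily what we used:

```python
import time
import requests

# Placeholder credentials: use your own Azure Computer Vision resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def read_text(image_path: str) -> list[str]:
    """Send an image to Azure's Read API and return the recognized lines."""
    with open(image_path, "rb") as f:
        response = requests.post(
            f"{ENDPOINT}/vision/v3.2/read/analyze",
            headers={
                "Ocp-Apim-Subscription-Key": KEY,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
        )
    response.raise_for_status()

    # The Read API is asynchronous: poll the operation until it finishes.
    operation_url = response.headers["Operation-Location"]
    while True:
        result = requests.get(
            operation_url, headers={"Ocp-Apim-Subscription-Key": KEY}
        ).json()
        if result["status"] in ("succeeded", "failed"):
            break
        time.sleep(1)

    return [
        line["text"]
        for page in result["analyzeResult"]["readResults"]
        for line in page["lines"]
    ]

print(read_text("medicine_box.jpg"))
```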

Meanwhile, I tried to figure out what our product could look like.

At first, I thought of a device that would be placed next to each patient’s bed. That way, each device could easily be linked to the correct patient. But I immediately knew that this couldn’t be our final solution, as it would be very expensive for hospitals and care centers, and our main goal was to find a smart solution that would reduce the current costs. So, I had to come up with another solution.

After some discussions with my other teammate, Briek, I came up with what would become our final solution: a smart scanner with which a nurse could first scan a patient and then scan the patient’s medication. That way, hospitals and care centers would only need one device per working nurse. By using a device that looks like a self-scanner, scanning the medication would also be a lot easier for the nurses than with our first idea.

Screenshot of a slide from the PowerPoint used in our presentation

When our idea for the final product was ready, we designed our interface and worked out how hospitals could concretely use it. We came up with a wall of smart scanners, similar to the wall of self-scanners one can find in a supermarket. At the beginning of their workday, nurses scan their badge at the wall. If the badge is scanned correctly, the smart scanner with the highest charge lights up and is automatically linked to that nurse.

Before giving a patient their medication, the nurse first scans the patient’s wristband and then the medication itself. The smart scanner automatically checks the database: if the patient is allowed to get this medication, the nurse gets approval; otherwise, the scanner shows a warning.

With our final solution, the administration of the hospital or care center will automatically know which nurse gave which patient what medication.
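As a toy illustration of that core check (all names and data below are made up, not our actual database schema):

```python
# Toy sketch of the final verification step (hypothetical data).
prescriptions = {
    # patient_id -> medication name -> maximum allowed dose in mg
    "patient-001": {"Paracetamol": 500},
}
administration_log = []  # who gave which patient what medication

def verify_scan(nurse_id, patient_id, medication, dose_mg):
    """Return True (approval) or False (warning) for a scanned medication."""
    allowed = prescriptions.get(patient_id, {})
    ok = medication in allowed and dose_mg <= allowed[medication]
    administration_log.append((nurse_id, patient_id, medication, dose_mg, ok))
    return ok

print(verify_scan("nurse-42", "patient-001", "Paracetamol", 500))  # True
print(verify_scan("nurse-42", "patient-001", "Ibuprofen", 200))    # False
```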

Finishing our product

When our final idea was ready, it was time to finish our product.

Emiel finished the backend by improving our text recognition. We only needed an extra function that could map the raw results of the API to the names of actual medications. That way, when the API recognized some medication as ‘aracetamol’, our program knew it actually meant ‘Paracetamol’.
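The mapping itself is Emiel’s code, but the idea can be sketched with Python’s built-in difflib, which does fuzzy string matching out of the box (the medication list here is a made-up example):

```python
from difflib import get_close_matches

# Hypothetical list: in the real project this would come from a database.
KNOWN_MEDICATIONS = ["Paracetamol", "Ibuprofen", "Omeprazole", "Amoxicillin"]

def normalize_medication(ocr_text: str) -> str | None:
    """Map a possibly garbled OCR result onto a known medication name."""
    matches = get_close_matches(
        ocr_text.capitalize(), KNOWN_MEDICATIONS, n=1, cutoff=0.8
    )
    return matches[0] if matches else None

print(normalize_medication("aracetamol"))  # Paracetamol
```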

While Emiel was working on that, Briek made the PowerPoint for our presentation and worked out some ideas for further improvements, while I started to make the interface for our demo.

Because we had quite a few problems with sending pictures to the API, our interface and backend were only linked 20 minutes before our presentation. It was very close, but we made it. Although we had only gotten a day and a half for this project, we finished just in time.

Presenting our results

Around 2:30 pm it was time to present our minimal product.

After our presentation, we got some questions and feedback. One of our lecturers suggested also saving the warnings about wrong medication or doses. That way, the administration of the hospital or care center knows which mistakes the smart scanner prevented and which nurse made them.

After all presentations for both projects were over, it was time for the concluding reception. This reception gave the people from Delaware and Cinionic some time to choose the winning team of their challenge.

After a short while, Delaware announced us as the winners of their challenge. The main reason we won was that we had made the most progress in this short amount of time. After hearing this, our team of course congratulated each other.

After that, we chatted with the other groups about their experiences with the challenges and of course also networked a bit with the people from Delaware and Cinionic.

Interview with VIVES

Only 2 days after our project, the winning groups were contacted by email by VIVES, asking for an interview.

At first, we hesitated a little, as all of us are quite shy in front of a camera, but eventually, we gave in as we thought it might be a good experience.


VIVES interview

It was certainly weird to talk to a camera about our project, but it was also a unique experience. Unfortunately, as of today, VIVES still hasn’t uploaded the video of our interview. They promised to upload it after the Easter holiday. When they do, I will update this blog post.

But even if they don’t upload our interview, at least we had some fun that day 😉

Reflection

Reflection on our product

Of course, our final solution isn’t flawless. Even after a warning from the smart scanner, nurses can still give patients that medication. And if nurses don’t use our smart scanner at all, they might still give the wrong medicine to the wrong patient by mistake. But this can be caught through the stock administration of the hospital or care center: since we would recommend that doses are only logged in the database via the smart scanner, any medication given without the scanner would eventually show up as a stock discrepancy.

Our lecturer also made a good point when saying we should save the warnings as well. In a domain like healthcare, it is very important to stay on top of every mistake.

But in general, I am proud to say that we made a strong business case.

Self-reflection

As we didn’t have any experience with text recognition beforehand, I am very proud that we were able to demonstrate a minimal product at our presentation. I certainly didn’t expect to win, especially because we only finished our project 20 minutes before the presentation. But I would have been proud of our work regardless.

I do regret not working on the backend, as that would have been very educational, but we already had Emiel and Briek on it. Besides, in the first hours of the first day, I struggled a lot with Python, which suddenly didn’t work anymore on my laptop. Still, I learned a lot by discussing the problems Emiel experienced with Tesseract and Azure’s OCR.

On the other hand, this was my first time creating an application with Python. I can’t say that I am a huge fan, but it was certainly an interesting experience.

Anyway, as soon as I find the time, I will look at Emiel’s code and try to understand it to the core, as I am very curious about how it works and didn’t get the time during the challenge.

Overall, it was a great experience and I can’t wait to learn even more about AI!
