GSoC’21 Phase-II | Smartpens

Rohan Mallick
Aug 20, 2021

This is an update blog for the Smartpen project @caMicroscope for GSoC'21.

About caMicroscope,

caMicroscope is a web-based biomedical image and data viewer, with a strong emphasis on cancer pathology WSI (Whole Slide Imaging). It is a digital pathology data management, visualization and analysis platform. It consists of a set of web services to manage digital pathology images, associated clinical and imaging metadata, and human/machine generated annotations and markups.

caMicroscope includes several apps such as viewer, heatmap viewer, annotation & label manager, measurement, side-by-side viewer, cross-slide coordinated viewer and image enhancements. caMicroscope also supports advanced features such as segmentation and classification on the slides using tfjs models.

About the project,

Smartpen: this project adapts pathology annotation tools to prefer following edges in the base image when the cursor is close to them, and adds other smart techniques to polish the result. The tool helps the user draw highly accurate free-hand drawings/annotations with the help of computer vision. The project also includes extending other annotation-related facilities.

Work done,

Demo

Features delivered:

  • Smartpen
  • Moving Points
  • Segmentation Annotation

Find my PRs at the end.

Find my Phase-I blog here.

Find my weekly blogs here.

Journey,

It started with the community bonding period, and now it is submission time. Let me summarize my journey in a nutshell.

Community bonding was quite an interesting period; I got to bond with the mentors as well as the org. Going into the coding phase, since I had already implemented edge detection and a few other things (the tedious part), I was able to implement the basic version of Smartpen within the first two weeks. After an initial review from my mentor, it was time for optimizations. Thinking of various ways to optimize the Smartpen and coming up with novel techniques was quite a challenge. Still, I was able to add a few optimizations later on that improved both speed and accuracy. After the third week, I sent my first PR and got approval from my mentor. Then I started on Moving Points, a feature that closely matches the objective of my project. It was quite mathematical and required firm logic to implement; I think it is quite impactful even though it looks simple.

As I was almost done with my project, I took a few days off. After completing Phase-1, I started brainstorming more ideas. It was then that I thought of extending segmentations as annotations. This idea, too, closely matches the objective of my project, i.e. machine-aided annotations. I started with the implementation, but understanding the annotation management took quite some time. Finally, after a few design reviews and bug fixes, my code was approved.

Smartpen,

Smartpen aligns any annotation to the nearest/most suitable boundaries (edges) in the image. It has two modes: Real-time and After-Draw. In Real-time mode, the annotation is guided toward the edges in real time as the user draws. In After-Draw mode, the drawing automatically snaps to the edges once the user finishes the annotation. There is also an undo button to revert the snap, and the behaviour can be tuned from the settings panel.
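For a rough idea of how the two modes fit together, here is a minimal sketch; the class, method and callback names are illustrative assumptions, not the actual caMicroscope code.

```javascript
// Minimal sketch of the two Smartpen modes (illustrative names only).
class SmartpenSketch {
  constructor(snapToEdge, mode = 'real-time') {
    this.snapToEdge = snapToEdge; // (point) => snapped point
    this.mode = mode;             // 'real-time' or 'after-draw'
    this.points = [];             // points currently shown on screen
    this.rawPoints = [];          // original free-hand points, kept for undo
  }

  addPoint(p) {
    this.rawPoints.push(p);
    // Real-time mode: snap each point as it is drawn.
    this.points.push(this.mode === 'real-time' ? this.snapToEdge(p) : p);
  }

  finishDrawing() {
    // After-Draw mode: snap the whole path once the user is done.
    if (this.mode === 'after-draw') {
      this.points = this.rawPoints.map(p => this.snapToEdge(p));
    }
    return this.points;
  }

  undoSnap() {
    // Undo simply restores the un-snapped free-hand path.
    this.points = this.rawPoints.slice();
    return this.points;
  }
}
```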

Details

Smartpen is extremely lightweight and blazingly fast.

It uses Canny edge detection. I didn't want to use OpenCV because it's heavy and hard to customize, so I wrote my own Canny edge detection module (borrowing code snippets from multiple open-source repos). Apart from the nearest-edge selection, there are a few heuristics that make the curves smooth and realistic. Making the feature real-time was another important task, and this is one of the reasons computer vision was preferred. Earlier, running all the algorithms on every stroke took a lot of time, but after caching the edge data it became blazingly fast.
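To illustrate the core snapping step, here is a sketch that looks up the nearest edge pixel in a cached binary edge map produced by Canny edge detection; the growing-ring search and the function signature are assumptions for illustration, not the actual implementation.

```javascript
// Sketch: snap a drawn point to the nearest edge pixel in a cached edge map.
// `edgeMap` is a Uint8Array of 0/1 values (the cached Canny output).
function snapToNearestEdge(point, edgeMap, width, height, maxRadius = 20) {
  const px = Math.round(point.x);
  const py = Math.round(point.y);
  // Search in growing square rings around the drawn point.
  for (let r = 0; r <= maxRadius; r++) {
    for (let dy = -r; dy <= r; dy++) {
      for (let dx = -r; dx <= r; dx++) {
        // Only visit the ring boundary, not the filled square.
        if (Math.max(Math.abs(dx), Math.abs(dy)) !== r) continue;
        const x = px + dx;
        const y = py + dy;
        if (x < 0 || y < 0 || x >= width || y >= height) continue;
        if (edgeMap[y * width + x]) return { x, y }; // nearby edge found
      }
    }
  }
  return point; // no edge nearby: keep the original point
}
```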

Smartpen is available in the annotations and preset labels tabs, but any utility using openseadragon-api can use it. Find more.

Some limitations

  • Smartpen works best when the user draws close to the boundaries, and real-time mode is highly accurate there. As the user moves away from the boundaries, accuracy starts to degrade.
  • Currently, this feature is used by only a few tools.

Moving Points,

This feature lets the user drag an annotation's points after drawing it, to perfect the shape. It is similar to anchor points, but in my implementation every drawn point is an anchor point.

To reshape the figure, one simply clicks anywhere on the annotation boundary and drags it with the mouse. Find more.
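A rough sketch of the idea, with hypothetical helper names: on mouse-down, grab the vertex of the annotation closest to the cursor; while dragging, move that vertex with the mouse.

```javascript
// Sketch of Moving Points: every drawn vertex acts as a draggable anchor.
// Helper names are hypothetical, not the actual caMicroscope code.
function findNearestVertex(points, mouse, threshold = 10) {
  let best = -1;
  let bestDist = threshold;
  points.forEach((p, i) => {
    const d = Math.hypot(p.x - mouse.x, p.y - mouse.y);
    if (d < bestDist) {
      bestDist = d;
      best = i;
    }
  });
  return best; // index of the grabbed vertex, or -1 if none is close enough
}

function dragVertex(points, index, mouse) {
  if (index >= 0) {
    points[index] = { x: mouse.x, y: mouse.y }; // move the anchor with the cursor
  }
  return points;
}
```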

Segmentation Annotation,

I extended the segmentation tab to save segmentations as annotations for future use (for labeling, etc.). This is a very natural addition to the workflow. To convert a segmentation into an annotation, I use the contours of the segmentation.
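As a rough illustration of the contour-to-annotation step, the sketch below wraps an already extracted contour as a polygon annotation; the GeoJSON-like shape and field names are assumptions for illustration, not the exact caMicroscope annotation schema.

```javascript
// Sketch: turn a segmentation contour (ordered list of {x, y} image points)
// into a polygon annotation object. The schema here is illustrative only.
function contourToAnnotation(contour, label = 'segmentation') {
  const ring = contour.map(p => [p.x, p.y]);
  // Close the ring by repeating the first point at the end, if needed.
  if (ring.length > 0) {
    const [fx, fy] = ring[0];
    const [lx, ly] = ring[ring.length - 1];
    if (fx !== lx || fy !== ly) ring.push([fx, fy]);
  }
  return {
    type: 'Feature',
    geometry: { type: 'Polygon', coordinates: [ring] },
    properties: { label },
  };
}
```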

Also, the segmentation tab had no existing annotation display service, so I added annotation overlays for the display. Find more.

Acknowledgment,

I’m really thankful to my mentors, Insiyah Hajoori and Ryan Birmingham, for their constant support and help. Their feedback, validation and reviews were extremely valuable. It was a really pleasant journey.

Code,

My contributions are (all merged):

Issue #523 — Smartpen issue

PR #526 — Basic implementation of smartpen with all the features

PR #532 — Moving Points feature

PR #542 — Segmentation Annotation

My contributions other than GSoC can be seen here.
