177 — Deep Learning based Retinal OCT Segmentation
Read on 13 February 2018

Another paper with authors in my group at APL! We do cool things.
I’ve written before about OCT, and the difficulties of automatically segmenting these sometimes very noisy images.
But APL fixed all of our problems!
Using OCT images from a public dataset, the researchers trained a fully convolutional neural network on annotations from two experts. Having two experts annotate the same images makes it possible to measure inter-operator agreement, i.e. how closely two humans agree on the task we're asking the network to do. In some sense, this gives us a "difficulty" measure for the task, which comes in handy when deciding whether a model is genuinely good or just badly overfit.
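As a concrete illustration (not from the paper), here's a minimal sketch of how such an inter-operator agreement baseline might be computed, assuming each expert's annotation is a per-pixel label map of retinal layer indices. The per-layer Dice overlap between the two experts then acts as the "human ceiling" against which a model's own Dice scores can be judged. The layer count, array shapes, and the `dice_per_layer` helper are all hypothetical.

```python
import numpy as np

def dice_per_layer(labels_a: np.ndarray, labels_b: np.ndarray, n_layers: int) -> dict:
    """Per-layer Dice overlap between two per-pixel label maps of the same B-scan.

    labels_a, labels_b: integer arrays of shape (H, W), values in [0, n_layers).
    Returns a dict mapping layer index -> Dice score in [0, 1].
    """
    scores = {}
    for layer in range(n_layers):
        a = labels_a == layer
        b = labels_b == layer
        denom = a.sum() + b.sum()
        # If neither expert marked this layer, count it as perfect agreement.
        scores[layer] = 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
    return scores

# Hypothetical usage: two experts' annotations of the same OCT B-scan.
rng = np.random.default_rng(0)
expert_1 = rng.integers(0, 9, size=(496, 768))   # 9 classes: 8 layers + background
expert_2 = rng.integers(0, 9, size=(496, 768))

agreement = dice_per_layer(expert_1, expert_2, n_layers=9)
# The network's Dice against either expert can then be compared to this
# human-vs-human agreement, rather than to an assumed-perfect ground truth.
print({k: round(v, 3) for k, v in agreement.items()})
```

The point of the comparison is that no single expert's annotation is "truth": if the model agrees with each expert about as well as the experts agree with each other, it has hit the ceiling the task allows.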
It's worth noting that the images used in this work are a mix of diseased and healthy eyes. This matters because correctly finding the boundaries between retinal layers in a healthy individual is a very different task from doing the same in an individual with a retinopathy.
The results? This automatic segmentation often outperforms the current state of the art, and in some cases even exceeds expected human performance.
Though it's a bit premature to deploy this work in a clinical setting, it means that research OCT work now requires dramatically less human ground-truthing than before: the method performs comparably to a trained human, and likely always better than an untrained one.