• Breaking News

    Monday, September 6, 2021

    Medical Imaging Classification Computer Science

    Medical Imaging Classification

    Posted: 05 Sep 2021 02:19 PM PDT

    Though not my core area of interest, I've just applied my basic supervised image classification algorithm to an MRI classification dataset from Kaggle. The task is to classify the type of brain tumor, or its absence, given four classes of MRIs. The accuracy is consistently around 100% (using randomized partitions into training/testing datasets), and the runtime over 1,503 training rows is 15 seconds (including pre-processing), running on an iMac.
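
    For readers who want the shape of the workflow, here's a rough sketch in Python: flatten each image into a vector, split the rows at random into training and testing sets, and score each test row against the training data. This is a generic nearest-neighbor baseline for illustration only, not the actual algorithm (which is described in the links below), and it assumes you've already loaded the images as a float array X with integer labels y.

        # Generic nearest-neighbor baseline for illustration, not the algorithm
        # described in the linked paper. Assumes X is an (n_images, n_pixels)
        # float array and y is an (n_images,) array of class labels.
        import numpy as np

        def train_test_split(X, y, test_fraction=0.2, seed=0):
            # Random partition into training and testing rows.
            rng = np.random.default_rng(seed)
            idx = rng.permutation(len(X))
            cut = int(len(X) * (1 - test_fraction))
            return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]

        def nearest_neighbor_predict(X_train, y_train, X_test):
            # Predict the label of the nearest training vector (Euclidean distance).
            preds = []
            for x in X_test:
                d = np.linalg.norm(X_train - x, axis=1)
                preds.append(y_train[np.argmin(d)])
            return np.array(preds)

        # X_tr, y_tr, X_te, y_te = train_test_split(X, y)
        # accuracy = np.mean(nearest_neighbor_predict(X_tr, y_tr, X_te) == y_te)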

    I've also applied supervised clustering to the same dataset. This would allow doctors to quickly pull the most similar brain scans, something that can't be done by hand for a large number of patients. Clustering the entire testing dataset of 376 rows took about 10 seconds, again running on an iMac.
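
    The retrieval use case looks roughly like this: given a query scan, return the indices of the k most similar stored scans by distance. Again, this is a plain nearest-neighbor illustration, not the supervised clustering algorithm itself; most_similar and the k parameter are just names chosen for the sketch.

        # Sketch of similarity retrieval: indices of the k stored scans closest
        # to a query scan. Illustration only, not the actual supervised
        # clustering algorithm.
        import numpy as np

        def most_similar(X_train, query, k=5):
            d = np.linalg.norm(X_train - query, axis=1)  # distance to every stored scan
            return np.argsort(d)[:k]                     # indices of the k closest scans

        # similar_idx = most_similar(X_tr, X_te[0], k=5)  # top-5 matches for one test scan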

    This approach should work for any single-image classification dataset where physical structure or coloring is indicative of disease. I'm in the process of applying the same techniques to a skin cancer dataset, and will write a formal paper on the topic shortly.

    Code and pictures here:

    https://derivativedribble.wordpress.com/2021/09/03/medical-imaging-classification/

    The underlying software is explained here:

    https://www.researchgate.net/publication/346009310_Vectorized_Deep_Learning

    NOTE: there's a comment about overfitting given the accuracy, but I'll note that this algorithm generally performs extremely well on single-object classification, e.g., 99.971% accuracy on the MNIST digit dataset.

    Moreover, there is no fitting at all; there's just one training algorithm.

    It is the exact same algorithm, which trains in the same way, and it's AutoML.

    There's also a comment suggesting the algorithm only works for "easy" datasets or "easy" rows, which is not the case. Rather, it knows in advance, on a blind basis and using only the training data, when a prediction is likely to be wrong, and it simply doesn't predict in that case. That is plainly not overfitting, because the model is never changed on account of testing rows that lack precedent in the training data.
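
    As an illustration of that "declines to predict" behavior, one simple way to implement it is to fix a distance threshold using only the training data (e.g., the largest nearest-neighbor distance observed within the training set) and abstain on any test row whose nearest training vector lies beyond it. The threshold rule below is an assumption made for the sketch, not the published method.

        # Abstention sketch: the threshold comes from the training data alone,
        # and the model is never changed in response to test rows.
        import numpy as np

        def fit_threshold(X_train):
            # Largest nearest-neighbor distance within the training set.
            dists = []
            for i, x in enumerate(X_train):
                d = np.linalg.norm(X_train - x, axis=1)
                d[i] = np.inf                  # ignore the row's distance to itself
                dists.append(d.min())
            return max(dists)

        def predict_or_abstain(X_train, y_train, x, threshold):
            d = np.linalg.norm(X_train - x, axis=1)
            i = np.argmin(d)
            return y_train[i] if d[i] <= threshold else None   # None = no prediction

        # threshold = fit_threshold(X_tr)
        # label = predict_or_abstain(X_tr, y_tr, X_te[0], threshold)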

    submitted by /u/Feynmanfan85

    Can someone please explain this code, especially lines 4-9? It's a bit confusing.

    Posted: 06 Sep 2021 04:04 AM PDT

    Question about multiple "Single" Travelling Salesman Problem

    Posted: 23 Aug 2021 10:46 PM PDT

    Bitbanging 1D Reversible automata (has pics)

    Posted: 23 Aug 2021 11:33 AM PDT
