Diagnosing ADHD Using Brain Biomarkers and Machine Learning

April 10, 2020
in Neural Networks

Imagine this: you’re sitting in the waiting room of a hospital, 99.9 percent sure you broke your arm.

You’re expecting the doctor to inspect your arm, give you an X-ray, then wrap it up in a cast. Sounds pretty standard, right?

But when you enter the doctor’s office, the doctor just starts asking you a bunch of questions about your arm. He asks where it hurts, whether you can bend it in certain directions, and so on.

“Nope, it’s not broken,” he says after 30 seconds of being in the room. No inspection, no X-ray, no cast. Just a questionnaire — and you’re almost positive you broke your arm.

…

Makes no sense, right? Why couldn’t the doctor have taken an X-ray, or even have just looked at your arm?

In this scenario, the doctor obviously could have done this — it’s pretty easy to determine if someone has a broken arm. But with other diseases, psychiatric disorders in particular, this is not the case.

As of right now, psychiatric diagnoses are entirely symptom-based. A patient completes some questionnaire to determine whether they have a certain condition, just like your experience at the doctor’s with a broken arm.

This makes no sense. We’re failing to address the neuroscience and biology underlying mental health disorders like anxiety, depression, and ADHD in our diagnoses. In ADHD specifically, the consequences are clear:

There is an abundance of evidence suggesting that ADHD is misdiagnosed and overdiagnosed. According to one study, around 1.1 million children in the U.S. received an inappropriate diagnosis of ADHD, and over 800,000 received stimulant medication due only to relative immaturity for their age.

And taking stimulant medication for ADHD when you don’t have it has been shown to have harmful side effects, such as memory loss.

It’s clear that our current diagnostic method is inaccurate. We need to start taking a more science-driven approach, actually investigating the root causes of mental disorders like ADHD and diagnosing them objectively.

This is why I’ve been passionately working on using deep learning techniques to diagnose ADHD based on brain biomarkers, specifically functional brain connectivity.

ADHD (short for attention-deficit hyperactivity disorder) is a behavioral disorder characterized by:

  • Restlessness
  • Impulsiveness
  • Inability to focus or concentrate

While ADHD can affect anyone, it’s most commonly diagnosed in children. In fact, around 1 in 20 children in the U.S. has an ADHD diagnosis, making it the most common psychiatric disorder in America. As I stated earlier, around 1.1 million of these children received an inappropriate diagnosis.

Despite this, we still aren’t sure of the exact causes, which is part of what makes ADHD so hard to diagnose. Scientists have hypothesized that those affected by ADHD have brains that mature more slowly, perhaps lagging around three years behind.

This means that some areas of the brain, such as the amygdala, which plays a role in emotion regulation, are smaller than average until the brain reaches maturity. Scientists have also found differences in ADHD brains by analyzing functional brain connectivity.

Functional brain connectivity is a relatively new way of measuring the brain. Rather than looking at how neurons in the brain are structurally connected, functional brain connectivity measures how different regions of the brain are functionally connected (i.e., the correlation between different parcellated voxels, or nodes, of the brain).

We can gather functional data of the brain using a lot of different methods, but the one most commonly used to date is functional magnetic resonance imaging (fMRI).

Functional magnetic resonance imaging

When different parts of the brain are activated, they are provided with more energy by adjacent capillaries through a process called the hemodynamic response. This supplies the region with increased cerebral blood flow and an increase in oxygen supply.

This process results in a change in terms of the relative levels of oxyhemoglobin and deoxyhemoglobin that can be detected by MR imaging. This imaging approach is called blood oxygen level–dependent (BOLD) contrast imaging. The change in the BOLD signal is the cornerstone of functional MR imaging.

When one part of the brain is more active than others, more blood flows to that part, which can be detected with an fMRI scan. This technique can be used to study which parts of the brain are activated during specific tasks.

Several studies have shown that there may be functional connectivity differences between controls and ADHD patients in the resting-state, or when the subject is not completing a specific task (i.e. no external stimulus).

Functional integration analysis of resting-state fMRI time series (A and D) → functional connectome

I looked further into this by computing a functional integration analysis of resting-state fMRI time series: a static graph analysis in which parcellated brain regions are represented as nodes in a graph and the correlations between them as edges.

I compared the results between controls and patients with ADHD, using data from the ADHD-200 dataset, to investigate biomarkers of ADHD. The connectivity data would then be used as input to a deep neural network to diagnose ADHD.
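The resting-state fMRI data comes from the ADHD-200 sample. Nilearn ships a small preprocessed subset of it, which is likely how the adhd_data variable used in the snippets below was loaded; a minimal sketch (the subject count is an assumption):

from nilearn import datasets

# download Nilearn's small preprocessed sample of the ADHD-200 resting-state data
adhd_data = datasets.fetch_adhd(n_subjects=30)

# adhd_data['func'] is a list of paths to each subject's 4D resting-state fMRI image
print(adhd_data['func'][0])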

Before extracting the functional connectivity correlation coefficients (the values representing how strong functional connections between brain regions are), we need to first conduct an independent component analysis (ICA) on the raw fMRI data.

ICA is a method to separate a multivariate signal into its additive subcomponents. You can think about it like this: you’re in a noisy room, and you only want to listen to the person talking to you, so you try to filter out all of the other people talking.

The same idea goes for fMRI data: you’re separating a signal with many sources into BOLD subcomponents, one for each region of interest (ROI) of the brain. I used Nilearn, a Python library, to do this:

from nilearn import decomposition, plotting, image

# independent component analysis (ICA) on the fMRI data
canica = decomposition.CanICA(n_components=20, mask_strategy='background')
canica.fit(adhd_data['func'])

# retrieving the components
components = canica.components_

# projecting components back into 3D brain space
components_img = canica.masker_.inverse_transform(components)

# plotting the default mode network only
plotting.plot_stat_map(image.index_img(components_img, 9), title='DMN')
plotting.show()

# plotting all the components
plotting.plot_prob_atlas(components_img, title='All ICA components')
plotting.show()

The result can be seen below:

Extracted regions of interest (ROIs)

Above is a statistical map of the default mode network (DMN), the network that is active during self-referential thought, particularly in the resting state, shown alongside the ROIs extracted by the ICA.

With Nilearn, I was then able to extract the functional connectivity correlation coefficients between each region of the brain:
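A minimal sketch of this step, using Nilearn’s NiftiMapsMasker and ConnectivityMeasure (the original snippet isn’t shown in the post; the variable names carry over from the ICA code above):

from nilearn.input_data import NiftiMapsMasker
from nilearn.connectome import ConnectivityMeasure

# project each subject's 4D scan onto the 20 ICA components,
# giving one (timepoints x 20) time-series matrix per subject
masker = NiftiMapsMasker(maps_img=components_img, standardize=True)
time_series = [masker.fit_transform(func) for func in adhd_data['func']]

# correlate the 20 regional time series with each other,
# yielding one 20 x 20 connectivity (adjacency) matrix per subject
correlation_measure = ConnectivityMeasure(kind='correlation')
correlation_matrices = correlation_measure.fit_transform(time_series)

print(correlation_matrices.shape)  # (n_subjects, 20, 20)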

Example adjacency matrix from subject 1 (index 0) with label 1 (ADHD subject)

Above is one of the connectivity, or adjacency, matrices extracted from a subject. There are 20 rows and 20 columns, one for each of the 20 ROIs. The color of a square represents the value of the associated functional connectivity correlation coefficient.

I then averaged the correlation matrices across the control subjects and across the ADHD patients. The results can be visualized as functional connectomes, maps of functional connectivity represented as nodes and edges, shown below:

Functional connectomes between groups
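A hedged sketch of how these group-average connectomes could be produced with Nilearn (the labels array and the grouping logic are assumptions building on the earlier snippets):

import numpy as np
from nilearn import plotting

# labels: assumed 0/1 NumPy array (0 = control, 1 = ADHD),
# built from the ADHD-200 phenotypic information
mean_control = correlation_matrices[labels == 0].mean(axis=0)
mean_adhd = correlation_matrices[labels == 1].mean(axis=0)

# approximate node coordinates for the 20 ICA components
coords = plotting.find_probabilistic_atlas_cut_coords(maps_img=components_img)

# plot each group's average connectome, keeping only the strongest 20% of edges
plotting.plot_connectome(mean_control, coords, edge_threshold='80%', title='Controls')
plotting.plot_connectome(mean_adhd, coords, edge_threshold='80%', title='ADHD')
plotting.show()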

If you look closely, you can see slight differences between these two groups. I was then able to program and train the neural network on vectorized connectivity matrices from each subject.
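Each 20 × 20 correlation matrix is symmetric, so only its upper triangle (including the diagonal) carries unique information, which is where the 210 input values mentioned below come from. A minimal sketch of this vectorization, assuming the correlation_matrices array from the earlier connectivity step:

import numpy as np

# upper-triangle indices (including the diagonal) of a 20 x 20 matrix:
# 20 * 21 / 2 = 210 unique values per subject
rows, cols = np.triu_indices(20)
features = np.array([matrix[rows, cols] for matrix in correlation_matrices])

print(features.shape)  # (n_subjects, 210)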

I programmed the architecture of the neural network in PyTorch:

import torch
from torch import nn, optim
import torch.nn.functional as F

# defining the architecture
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(210, 100)
        self.fc2 = nn.Linear(100, 50)
        self.fc3 = nn.Linear(50, 50)
        self.fc4 = nn.Linear(50, 1)
        self.dropout = nn.Dropout(0.25)

    def forward(self, x):
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))

        # output layer, so no dropout
        x = torch.sigmoid(self.fc4(x))

        return x

As can be seen above, 210 functional connectivity coefficients are used as input per subject to the feedforward neural network. Because this is a binary classification problem, the output is passed through a sigmoid function and then into the binary cross-entropy loss defined below:
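That loss, which nn.BCELoss computes over a batch of N subjects (with ŷ the sigmoid output and y the true 0/1 label), is:

$$\mathcal{L}_{\text{BCE}} = -\frac{1}{N}\sum_{n=1}^{N}\Big[\,y_n \log \hat{y}_n + (1 - y_n)\log(1 - \hat{y}_n)\,\Big]$$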

model = Classifier()
criterion = nn.BCELoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)

Note that I used stochastic gradient descent (SGD) to minimize the cost function, meaning the parameters are updated from one sample (or mini-batch) at a time rather than from the full dataset.

Backpropagation

A review of backpropagation with stochastic gradient descent can be seen above. First, the gradient of the error function is calculated: the vector formed by taking the partial derivatives of the error (loss) with respect to all of the weights (and biases, of course!).

Because the goal is to decrease the error, we step in the direction of the negative gradient, scaled by some learning rate, α:

Updating weights
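In symbols, the standard gradient-descent update the figure illustrates is, for each weight w:

$$w \leftarrow w - \alpha \,\frac{\partial \mathcal{L}}{\partial w}$$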

You can see above that each weight is updated by subtracting the partial derivative of the loss with respect to that weight, multiplied by the learning rate, α. My model’s learning rate is 0.003.

The code can be seen below:

import numpy as np
import torch

epochs = 30
steps = 0
valid_losses = []

for e in range(epochs):
    model.train()
    for inputs, labels in train_loader:
        steps += 1

        optimizer.zero_grad()

        output = model(inputs.float())
        # BCELoss expects float targets with the same shape as the output
        loss = criterion(output, labels.float().view(-1, 1))
        loss.backward()
        optimizer.step()

    # turning off gradients for validation to save memory and computation
    model.eval()
    with torch.no_grad():
        for inputs, labels in valid_loader:
            output = model(inputs.float())
            valid_loss = criterion(output, labels.float().view(-1, 1))
            valid_losses.append(valid_loss.item())

    print("Epoch: {}/{}...".format(e + 1, epochs),
          "Step: {}...".format(steps),
          "Loss: {:.6f}...".format(loss.item()),
          "Val Loss: {:.6f}".format(np.mean(valid_losses)))

After training and testing the model, it was able to diagnose ADHD correctly approximately 73% of the time. Mind you, the dataset I used is pretty small. The accuracy and generalizability of the model could likely be increased with a larger dataset, hyperparameter tuning, and experimentation with the model architecture.
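The evaluation code isn’t shown in the post; here is a minimal sketch of how test accuracy could be computed, assuming a held-out test_loader built the same way as the training and validation loaders:

import torch

correct, total = 0, 0
model.eval()
with torch.no_grad():
    for inputs, labels in test_loader:
        output = model(inputs.float())
        # a sigmoid output above 0.5 counts as an ADHD prediction
        predictions = (output > 0.5).float().view(-1)
        correct += (predictions == labels.float().view(-1)).sum().item()
        total += labels.size(0)

print("Test accuracy: {:.1f}%".format(100 * correct / total))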

I’m currently experimenting with different neural architectures to increase the accuracy, as well as collecting more data, including data from different studies, to make the algorithm more generalizable. The next steps of the project are making the inputs multi-phenotypic, incorporating genetic data along with behavioral symptoms.

Not only does this have the potential to make more accurate diagnoses in the future, but it also holds the potential to end stigma surrounding ADHD.

Of the parents surveyed who have a child with ADHD, 21% reported feeling misunderstood by teachers or primary medical professionals, and 6% mentioned facing negative judgment from significant others. These reactions led them to consider whether they should end their children’s prescribed medication.

This is shocking. However, gaining a better scientific understanding of mental disorders like ADHD could help end this stigma and create a brighter future for people suffering from mental disorders worldwide.

Credit: BecomingHuman By: Mikey Taylor
