<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<link rel="shortcut icon" href="myIcon.ico">
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<meta name="keywords" content="S. M. Kamrul Hasan, S. M. Kamrul Hasan, Center for Imaging Science, RIT">
<meta name="description" content="S. M. Kamrul Hasan's home page">
<meta name="google-site-verification" content="X2QFrl-bPeg9AdlMt4VKT9v6MJUSTCf-SrY3CvKt4Zs" />
<link rel="stylesheet" href="jemdoc.css" type="text/css">
<title>S. M. Kamrul Hasan's Homepage</title>
<!-- Google Analytics -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-159069803-1', 'auto');
ga('send', 'pageview');
</script>
<!-- End Google Analytics -->
</head>
<body>
<div id="layout-content" style="margin-top:25px">
<a href="https://github.com/SMKamrulHasan" class="github-corner"><svg width="80" height="80" viewBox="0 0 250 250" style="fill:#FD6C6C; color:#fff; position: absolute; top: 0; border: 0; right: 0;"><path d="M0,0 L115,115 L130,115 L142,142 L250,250 L250,0 Z"></path><path d="M128.3,109.0 C113.8,99.7 119.0,89.6 119.0,89.6 C122.0,82.7 120.5,78.6 120.5,78.6 C119.2,72.0 123.4,76.3 123.4,76.3 C127.3,80.9 125.5,87.3 125.5,87.3 C122.9,97.6 130.6,101.9 134.4,103.2" fill="currentColor" style="transform-origin: 130px 106px;" class="octo-arm"></path><path d="M115.0,115.0 C114.9,115.1 118.7,116.5 119.8,115.4 L133.7,101.6 C136.9,99.2 139.9,98.4 142.2,98.6 C133.8,88.0 127.5,74.4 143.8,58.0 C148.5,53.4 154.0,51.2 159.7,51.0 C160.3,49.4 163.2,43.6 171.4,40.1 C171.4,40.1 176.1,42.5 178.8,56.2 C183.1,58.6 187.2,61.8 190.9,65.4 C194.5,69.0 197.7,73.2 200.1,77.6 C213.8,80.2 216.3,84.9 216.3,84.9 C212.7,93.1 206.9,96.0 205.4,96.6 C205.1,102.4 203.0,107.8 198.3,112.5 C181.9,128.9 168.3,122.5 157.7,114.1 C157.9,116.9 156.7,120.9 152.7,124.9 L141.0,136.5 C139.8,137.7 141.6,141.9 141.8,141.8 Z" fill="currentColor" class="octo-body"></path></svg></a><style>.github-corner:hover .octo-arm{animation:octocat-wave 560ms ease-in-out}@keyframes octocat-wave{0%,100%{transform:rotate(0)}20%,60%{transform:rotate(-25deg)}40%,80%{transform:rotate(10deg)}}@media (max-width:500px){.github-corner:hover .octo-arm{animation:none}.github-corner .octo-arm{animation:octocat-wave 560ms ease-in-out}}</style>
<table>
<tbody>
<tr>
<td width="650">
<div id="toptitle">
<h1 style="color: blue">S. M. Kamrul Hasan</h1><h1>
</h1></div>
<h3 style="color: black">Ph.D Student</h3>
<p style="font-family: sans-serif;padding-bottom:30px" >
Biomedical Modeling, Visualization & Image-guided Navigation (BiMVisIGN) Lab <br>
Center for Imaging Science<br>
Rochester Institute of Technology <br>
Rochester, New York, USA<br>
<br>
Email: [email protected]
</p>
<p>
<a href="https://scholar.google.com/citations?user=M7XmUK0AAAAJ&hl=en#"><img src="./images/gs-logo.png" height="30px" style="margin-bottom:-3px"></a>
<a href="https://github.com/SMKamrulHasan"><img src="./pic/github_s.jpg" height="30px" style="margin-bottom:-3px"></a>
<a href="https://www.researchgate.net/profile/S_M_Kamrul_Hasan7"><img src="./pic/rg.png" height="30px" style="margin-bottom:-3px"></a>
<a href="https://www.linkedin.com/in/s-m-kamrul-hasan/"><img src="./images/linkedin-logo.png" height="30px" style="margin-bottom:-3px"></a>
</p>
</td>
<td>
<img src="./pic/git_image.jpg" border="0" width="260"><br>
</td>
</tr>
</tbody>
</table>
<h2 style="color: black;">Biography </h2>
<p style="padding-bottom:30px;text-align:justify" >
I am a fourth year Ph.D. student in <a href="https://www.cis.rit.edu/" style="color:blue" >Chester F. Carlson Center for Imaging Science</a> at Rochester Institute of Technology (RIT), Rochester, NY where I work on medical image analysis using machine learning.
</p>
In Fall 2020, I worked as a Machine Learning Research Intern at <a href="https://www.ibm.com/us-en/" style="color:blue" >IBM Research</a> in California where I've worked on deep neural network pruning/optimization for better explainable AI.
</p>
I am currently working in the Biomedical Modeling, Visualization and Image-guided Navigation Lab (a.k.a. BiMVisIGN) under the direction of my advisor, <a href="https://www.rit.edu/directory/calbme-cristian-linte" style="color:blue">Dr. Cristian Linte </a> and funded by both <a href="https://www.nsf.gov/awardsearch/showAward?AWD_ID=1808530&HistoricalAwards=false" style="color:blue">NSF and NIH grants </a>. Previously, I earned a bachelors in Electrical and Electronic (EE) Engineering from <a href="http://www.kuet.ac.bd/" style="color:blue" >Khulna University of Engineering & Technology (KUET)</a>, Bangladesh in 2015 and worked as a Lecturer in the Department of Computer Science and Engineering at <a href="https://daffodilvarsity.edu.bd/" style="color:blue" >Daffodil International University</a>, Bangladesh until 2017</a>.
</p>
<h2 style="color: black;">Research </h2>
<p style="padding-bottom:30px;text-align:justify" >My research focuses broadly on analyzing the medical images to enable more accurate segmentation, disease detection, and clinical parameter estimation, and allow more precisely tailored treatment plans, and ultimately improve patient outcomes, through the innovative use of Generative Models, Disentangled Representation Learning and Model Optimization in Deep Learning, Computer Vision, and Artificial Intelligence (AI). </p>
<h2 style="color: black;">News</h2>
<ul> <li> <a style="color:blue;">[Nov. - Dec. 2020]</a> Attending the Winter School on Cardiac Simulation 2020 at the Center for Computational Medicine in Cardiology, Switzerland.</li>
<li> <a style="color:blue;">[Oct. 2020]</a> Paper accepted at <a href="https://spie.org/conferences-and-exhibitions/medical-imaging?SSO=1" style="color:blue">SPIE Medical Imaging 2021</a>, San Diego, California.</li>
<li> <a style="color:blue;">[Aug. 2020]</a> Received a MICCAI student travel award as part of an NSF grant.</li>
<li> <a style="color:blue;">[Aug. 2020]</a> Started a research internship at <a href="https://www.research.ibm.com/" style="color:blue">IBM</a> Almaden Research Center, San Jose, California.</li>
<li> <a style="color:blue;">[May 2020]</a> Accepted a research internship offer at <a href="https://www.ibm.com/us-en/" style="color:blue">IBM</a>, San Jose, California.</li>
<li> <a style="color:blue;">[Apr. 2020]</a> One paper accepted for oral presentation at EMBC 2020.</li>
<li> <a style="color:blue;">[Apr. 2020]</a> Presented a paper at ISBI 2020.</li>
<li> <a style="color:blue;">[Feb. 2020]</a> Reviewer for MICCAI 2020.</li>
<li> <a style="color:blue;">[Feb. 2020]</a> Presented a paper at SPIE Medical Imaging 2020.</li>
<li> <a style="color:blue;">[Feb. 2020]</a> One paper accepted at the ISBI 2020 Workshop.</li>
<li> <a style="color:blue;">[Nov. 2019]</a> U-NetPlus paper accepted for oral presentation at the RIT Graduate Showcase 2019.</li>
<li> <a style="color:blue;">[Oct. 2019]</a> One paper accepted at SPIE Medical Imaging 2020.</li>
<li> <a style="color:blue;">[Apr. 2019]</a> One paper accepted at EMBC 2019.</li>
</ul>
<h2 style="color: black;" >Publications</h2>
<table id="tbPublications" width="100%">
<tbody>
<tr>
<td width="270">
<img src="./indexpics/hasan8gf.gif" alt="this slowpoke moves" width="250" alt="404 image"/>
<img src="./indexpics/kamrul2.png" width="250px" style="box-shadow: 4px 4px 8px #888" class="center">
<img src="./indexpics/hasan6gf.gif" alt="this slowpoke moves" width="250" alt="405 image"/>
</td>
<td style="text-align:justify"> Segmentation and Removal of Surgical Instruments for Background Scene Visualization from Endoscopic/Laparoscopic Video. <br>
<strong>S. M. Kamrul Hasan</strong>, Richard A. Simon, and Cristian A. Linte. <br>
<em>SPIE Medical Imaging</em>, 2021,
<strong>oral</strong>
<p>[<a href="https://www.researchgate.net/publication/346026748_Segmentation_and_Removal_of_Surgical_Instruments_for_Background_Scene_Visualization_from_Endoscopic_Laparoscopic_Video" style="color:blue" >paper</a>][<a href="https://github.com/SMKamrulHasan/Video_inpainting" style="color:blue" >code</a>][<a href="https://endovis.grand-challenge.org/" style="color:blue" >dataset</a>][<a href="papers/2020/iteravg/ref.bib" style="color:blue" >bibtex</a>][<a href="https://youtu.be/msCTpTJr_wY" style="color:blue" >Video</a>]</p>
<p style="padding-bottom:30px;text-align:justify">
In this work, we implement a fully convolutional segmenter featuring both a learned group structure and a regularized weight-pruner to reduce the high computational cost in volumetric image segmentation. We validated our framework on the ACDC dataset featuring one healthy and four pathology groups imaged throughout the cardiac cycle. Our technique achieved Dice scores of 96.80% (LV blood-pool), 93.33% (RV blood-pool) and 90.0% (LV Myocardium) with five-fold cross-validation and yielded similar clinical parameters as those estimated from the ground truth segmentation data. Based on these results, this technique has the potential to become an efficient and competitive cardiac image segmentation tool that may be used for cardiac computer-aided diagnosis, planning and guidance applications.
</p>
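<p style="text-align:justify">A minimal sketch of the underlying idea (for illustration only, not the paper's pipeline), assuming a binary instrument mask is already available; the file names are hypothetical:</p>
<pre style="background:#f6f6f6;padding:10px;overflow-x:auto"><code># Python / OpenCV sketch: remove masked instruments by inpainting.
import cv2
import numpy as np

frame = cv2.imread("frame.png")              # hypothetical video frame
mask = cv2.imread("instrument_mask.png", 0)  # hypothetical mask (255 = instrument)

# Dilate the mask slightly so instrument borders are fully covered.
mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))

# Fill the masked region from the surrounding background pixels.
restored = cv2.inpaint(frame, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("restored.png", restored)
</code></pre>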
</td>
</tr>
<tr> </tr>
<tr> </tr>
<tr> </tr>
<tr>
<td width="270">
<img src="./indexpics/RESULT2.png" width="250px" style="box-shadow: 4px 4px 8px #888" class="center">
<img src="./indexpics/PLOT1.png" width="250px" style="box-shadow: 4px 4px 8px #888" class="center">
</td>
<td style="text-align:justify"> L-CO-Net: Learned Condensation-Optimization Network for Clinical Parameter Estimation from Cardiac Cine MRI. <br>
<strong>S. M. Kamrul Hasan</strong> and Cristian A. Linte. <br>
<em>International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)</em>, 2020,
<strong>oral</strong>
<p>[<a href="https://ieeexplore.ieee.org/document/9176491" style="color:blue" >paper</a>][<a href="https://github.com/SMKamrulHasan/Regularized-Network" style="color:blue" >code</a>][<a href="https://www.creatis.insa-lyon.fr/Challenge/acdc/databases.html" style="color:blue" >dataset</a>][<a href="papers/2020/iteravg/ref.bib" style="color:blue" >bibtex</a>]</p>
<p style="padding-bottom:30px;text-align:justify">
In this work, we implement a fully convolutional segmenter featuring both a learned group structure and a regularized weight-pruner to reduce the high computational cost in volumetric image segmentation. We validated our framework on the ACDC dataset featuring one healthy and four pathology groups imaged throughout the cardiac cycle. Our technique achieved Dice scores of 96.8% (LV blood-pool), 93.3% (RV blood-pool) and 90.0% (LV Myocardium) with five-fold cross-validation and yielded similar clinical parameters as those estimated from the ground truth segmentation data. Based on these results, this technique has the potential to become an efficient and competitive cardiac image segmentation tool that may be used for cardiac computer-aided diagnosis, planning, and guidance applications.
</p>
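<p style="text-align:justify">For intuition, here is a minimal sketch (a simplification for illustration, not the released code) of how a clinical parameter such as ejection fraction follows from the LV blood-pool segmentation masks; the file names and voxel spacing are hypothetical:</p>
<pre style="background:#f6f6f6;padding:10px;overflow-x:auto"><code># Python sketch: ejection fraction from LV blood-pool masks.
import numpy as np

def lv_volume_ml(mask, voxel_volume_mm3):
    """Volume of a binary LV blood-pool mask in millilitres."""
    return mask.sum() * voxel_volume_mm3 / 1000.0   # 1 mL = 1000 mm^3

ed_mask = np.load("ed_lv_mask.npy")   # end-diastole segmentation (hypothetical)
es_mask = np.load("es_lv_mask.npy")   # end-systole segmentation (hypothetical)
voxel_volume = 1.25 * 1.25 * 10.0     # mm^3: example in-plane spacing x slice thickness

edv = lv_volume_ml(ed_mask, voxel_volume)   # end-diastolic volume
esv = lv_volume_ml(es_mask, voxel_volume)   # end-systolic volume
ef = 100.0 * (edv - esv) / edv              # ejection fraction, %
print(f"EDV={edv:.1f} mL, ESV={esv:.1f} mL, EF={ef:.1f}%")
</code></pre>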
</td>
</tr>
<tr> </tr>
<tr> </tr>
<tr> </tr>
<tr>
<td width="270">
<img src="./indexpics/model1.png" width="250px" style="box-shadow: 4px 4px 8px #888" class="center">
<img src="./indexpics/RESULT3.png" width="250px" style="box-shadow: 4px 4px 8px #888" class="center">
<img src="./indexpics/heart_main.gif" alt="this slowpoke moves" width="250" alt="405 image"/>
</td>
<td style="text-align:justify"> Learned Condensation-Optimization Network: A regularized Network for improved Cardiac Ventricles Segmentation on Breath-Hold Cine MRI. <br>
<strong>S. M. Kamrul Hasan</strong> and Cristian A. Linte. <br>
<em> International Symposium on Biomedical Imaging (ISBI)</em>, 2020,
<strong>oral</strong>
<p>[<a href="https://www.researchgate.net/publication/340595489_Learned_Condensation-Optimization_Network_A_regularized_Network_for_improved_Cardiac_Ventricles_Segmentation_on_Breath-Hold_Cine_MRI" style="color:blue" >paper</a>][<a href="https://github.com/SMKamrulHasan/Regularized-Network" style="color:blue" >code</a>][<a href="https://www.creatis.insa-lyon.fr/Challenge/acdc/databases.html" style="color:blue" >dataset</a>][<a href="papers/2020/iteravg/ref.bib" style="color:blue" >bibtex</a>]</p>
<p style="padding-bottom:30px;text-align:justify">
In this work, we implement a fully convolutional segmenter featuring both a learned group structure and a regularized weight-pruner to reduce the high computational cost of volumetric image segmentation. We validated the framework on the ACDC dataset, achieving mean Dice scores of 96.80% (LV blood-pool), 93.33% (RV blood-pool), and 90.0% (LV Myocardium), and yielding clinical parameters similar to those estimated from the ground-truth segmentation data.
</p>
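<p style="text-align:justify">All of the Dice scores reported on this page measure volumetric overlap between a predicted mask and a reference mask; a minimal sketch of the metric (illustration only):</p>
<pre style="background:#f6f6f6;padding:10px;overflow-x:auto"><code># Python sketch: Dice coefficient, 2*|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(pred, target, eps=1e-7):
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
</code></pre>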
</td>
</tr>
<tr> </tr>
<tr> </tr>
<tr> </tr>
<tr>
<td width="270">
<img src="./indexpics/5.png" width="250px" style="box-shadow: 4px 4px 8px #888" >
<img src="./indexpics/SPIE_2020_result.png" width="250px" style="box-shadow: 4px 4px 8px #888" >
</td>
<td style="text-align:justify"> CondenseUNet: a memory-efficient condensely-connected architecture for bi-ventricular blood pool and myocardium segmentation. <br>
<strong>S. M. Kamrul Hasan</strong> and Cristian A. Linte. <br>
<em>SPIE Medical Imaging</em>, 2020,
<strong>oral</strong>
<p>[<a href="https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11315/113151J/CondenseUNet---a-memory-efficient-condensely-connected-architecture-for/10.1117/12.2550640.short?SSO=1" style="color:blue" >paper</a>][<a href="https://github.com/SMKamrulHasan/CondenseUNet" style="color:blue" >code</a>][<a href="https://www.creatis.insa-lyon.fr/Challenge/acdc/databases.html" style="color:blue" >dataset</a>][<a href="papers/2020/iteravg/ref.bib" style="color:blue" >bibtex</a>]</p>
<p style="padding-bottom:30px;text-align:justify">
With the advent of Cardiac Cine Magnetic Resonance (CMR) Imaging, there has been a paradigm shift in medical technology, thanks to its capability of imaging different structures within the heart without ionizing radiation. However, it is very challenging to conduct pre-operative planning of minimally invasive cardiac procedures without accurate segmentation and identification of the left ventricle (LV), right ventricle (RV) blood-pool, and LV-myocardium. Manual segmentation of those structures, nevertheless, is time-consuming and often prone to error and biased outcomes. Hence, automatic and computationally efficient segmentation techniques are paramount. In this work, we propose a novel memory-efficient Convolutional Neural Network (CNN) architecture, a modification of both CondenseNet and DenseNet, for ventricular blood-pool segmentation, introducing a bottleneck block and an upsampling path. Our experiments show that the proposed architecture runs on the Automated Cardiac Diagnosis Challenge (ACDC) dataset using half (50%) the memory requirement of DenseNet and one-twelfth (∼8%) of the memory requirement of U-Net, while still maintaining excellent cardiac segmentation accuracy. We validated the framework on the ACDC dataset featuring one healthy and four pathology groups whose heart images were acquired throughout the cardiac cycle and achieved mean Dice scores of 96.78% (LV blood-pool), 93.46% (RV blood-pool), and 90.1% (LV Myocardium). These results are promising and promote the proposed method as a competitive tool for cardiac image segmentation and clinical parameter estimation, with the potential to provide fast and accurate results, as needed for pre-procedural planning and/or pre-operative applications.
</p>
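<p style="text-align:justify">A minimal sketch of what a condensed bottleneck block can look like (an assumption for illustration, not the released CondenseUNet code): a cheap grouped 1x1 convolution shrinks the channel count before the spatial 3x3 convolution, which is where the memory savings come from.</p>
<pre style="background:#f6f6f6;padding:10px;overflow-x:auto"><code># PyTorch sketch: grouped 1x1 bottleneck before a 3x3 convolution.
import torch.nn as nn

def bottleneck_block(cin, cout, groups=4):
    # cin and 4*cout must be divisible by `groups` for the grouped conv.
    return nn.Sequential(
        nn.BatchNorm2d(cin),
        nn.ReLU(inplace=True),
        nn.Conv2d(cin, 4 * cout, kernel_size=1, groups=groups),  # cheap 1x1
        nn.BatchNorm2d(4 * cout),
        nn.ReLU(inplace=True),
        nn.Conv2d(4 * cout, cout, kernel_size=3, padding=1),     # spatial 3x3
    )
</code></pre>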
</td>
</tr>
<tr> </tr>
<tr> </tr>
<tr> </tr>
<tr>
<td width="270">
<img src="./indexpics/cinc.png" width="250px" style="box-shadow: 4px 4px 8px #888" class="center">
</td>
<td style="text-align:justify"> Toward Quantification and Visualization of Active Stress Waves for Myocardial Biomechanical Function Assessment. <br>
Niels F. Otani, Dylan Dang, Christopher Beam, Fariba Mohammadi, Brian Wentz, <strong>S. M. Kamrul Hasan</strong>, Suzanne M. Shontz, Karl Q. Schwarz, Sabu Thomas, and Cristian A. Linte. <br>
<em>Computing in Cardiology (CinC)</em>, 2019.
<p>[<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7373340/" style="color:blue" >paper</a>][<a href="https://github.com/SMKamrulHasan" style="color:blue" >code</a>][<a href="" style="color:blue" >dataset</a>][<a href="papers/2020/iteravg/ref.bib" style="color:blue" >bibtex</a>]</p>
<p style="padding-bottom:30px;text-align:justify">
Estimating and visualizing myocardial active stress wave patterns is crucial to understanding the mechanical activity of the heart and provides a potential non-invasive method to assess myocardial function. These patterns can be reconstructed by analyzing 2D and/or 3D tissue displacement data acquired using medical imaging. Here we describe an application that utilizes a 3D finite element formulation to reconstruct active stress from displacement data. As a proof of concept, a simple cubic mesh was used to represent a myocardial tissue “sample” consisting of a 10 x 10 x 10 lattice of nodes featuring different fiber directions that rotate with depth, mimicking cardiac transverse isotropy. In the forward model, tissue deformation was generated using a test wave with active stresses that mimic the myocardial contractile forces. The generated deformation field was used as input to an inverse model designed to reconstruct the original active stress distribution. We numerically simulated malfunctioning tissue regions (experiencing limited contractility and hence active stress) within the healthy tissue. We also assessed model sensitivity by adding noise to the deformation field generated using the forward model. The difference image between the original and reconstructed active stress distribution suggests that the model accurately estimates active stress from tissue deformation data with a high signal-to-noise ratio.
</p>
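<p style="text-align:justify">As a toy analogy of the inverse model (illustration only; the paper uses a 3D finite element formulation, not this code): given a known linear forward operator mapping active stress to displacement, the stress can be recovered from noisy displacement data by least squares.</p>
<pre style="background:#f6f6f6;padding:10px;overflow-x:auto"><code># Python sketch: 1D linear inverse problem, stress from displacement.
import numpy as np

rng = np.random.default_rng(0)
n = 50
K = np.tril(np.ones((n, n))) / n                  # hypothetical forward operator
stress_true = np.sin(np.linspace(0.0, np.pi, n))  # "active stress" test wave
disp = K @ stress_true + 1e-3 * rng.standard_normal(n)  # noisy displacement

stress_rec, *_ = np.linalg.lstsq(K, disp, rcond=None)   # inverse model
print("max reconstruction error:", np.abs(stress_rec - stress_true).max())
</code></pre>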
</td>
</tr>
<tr> </tr>
<tr> </tr>
<tr> </tr>
<tr>
<td width="270">
<img src="./indexpics/EMBC_2019_result.png" width="250px" style="box-shadow: 4px 4px 8px #888" class="center">
<img src="./indexpics/EMBC_2019_result2.png" width="250px" style="box-shadow: 4px 4px 8px #888" class="center">
<img src="./indexpics/EMBC_2019_result3.png" width="250px" style="box-shadow: 4px 4px 8px #888" class="center">
<img src="./indexpics/parts.gif" alt="this slowpoke moves" width="250" alt="404 image"/>
</td>
<td style="text-align:justify"> U-NetPlus: A Modified Encoder-Decoder U-Net Architecture for Semantic and Instance Segmentation of Surgical Instruments from Laparoscopic Images. <br>
<strong>S. M. Kamrul Hasan</strong> and Cristian A. Linte. <br>
<em>International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)</em>, 2019,
<strong>oral</strong>
<p>[<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7372295/" style="color:blue" >paper</a>][<a href="unetplus.github.io" style="color:blue" >code</a>][<a href="https://endovis.grand-challenge.org/" style="color:blue" >dataset</a>][<a href="papers/2020/iteravg/ref.bib" style="color:blue" >bibtex</a>]</p>
<p style="padding-bottom:30px;text-align:justify">
With the advent of robot-assisted surgery, there has been a paradigm shift in medical technology for minimally invasive surgery. However, it is very challenging to track the position of the surgical instruments in a surgical scene, so accurate detection & identification of surgical tools is paramount. Deep learning-based semantic segmentation of surgical video frames has the potential to facilitate this task. In this work, we modify the U-Net architecture by introducing a pre-trained encoder and re-designing the decoder, replacing the transposed convolution operation with an upsampling operation based on nearest-neighbor (NN) interpolation. To further improve performance, we also employ a very fast and flexible data augmentation technique. We trained the framework on 8 x 225 frame sequences of robotic surgical videos available through the MICCAI 2017 EndoVis Challenge dataset and tested it on 8 x 75 frame and 2 x 300 frame videos. Using our U-NetPlus architecture, we report a 90.20% DICE for binary segmentation, 76.26% DICE for instrument part segmentation, and 46.07% DICE for instrument type (i.e., all instruments) segmentation, outperforming the results of previous techniques implemented and tested on these data.
</p>
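<p style="text-align:justify">The decoder change described above amounts to swapping a learned transposed convolution for fixed nearest-neighbor upsampling followed by a regular convolution; a minimal sketch (illustration only, not the released code):</p>
<pre style="background:#f6f6f6;padding:10px;overflow-x:auto"><code># PyTorch sketch: two interchangeable decoder upsampling steps.
import torch.nn as nn

def transposed_conv_block(cin, cout):
    # classic U-Net decoder step: learned 2x upsampling
    return nn.ConvTranspose2d(cin, cout, kernel_size=2, stride=2)

def nn_upsample_block(cin, cout):
    # U-NetPlus-style step: fixed NN upsampling, then a learned conv
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),  # no learned weights
        nn.Conv2d(cin, cout, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )
</code></pre>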
</td>
</tr>
<tr> </tr>
<tr> </tr>
<tr> </tr>
<tr>
<td width="270">
<img src="./indexpics/WNYISPW_2018_model.png" width="250px" style="box-shadow: 4px 4px 8px #888" class="center">
<img src="./indexpics/WNYISPW_2018_nnret.png" width="250px" style="box-shadow: 4px 4px 8px #888" class="center">
</td>
<td style="text-align:justify"> A Modified U-Net Convolutional Network Featuring a Nearest-neighbor Re-sampling-based Elastic-Transformation for Brain Tissue Characterization and Segmentation. <br>
<strong>S. M. Kamrul Hasan</strong> and Cristian A. Linte. <br>
<em> Western New York Image and Signal Processing Workshop (WNYISPW)</em>, 2018,
<strong>oral</strong>
<p>[<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6583803/" style="color:blue" >paper</a>][<a href="https://github.com/SMKamrulHasan/" style="color:blue" >code</a>][<a href="https://www.med.upenn.edu/sbia/brats2017/data.html" style="color:blue" >dataset</a>][<a href="papers/2020/iteravg/ref.bib" style="color:blue" >bibtex</a>]</p>
<p style="padding-bottom:30px;text-align:justify">
Brain tumor detection with Magnetic Resonance Imaging (MRI) remains a challenging task in modern medical image processing research. Expert neuro-radiologists still diagnose even deadly brain cancers such as glioblastoma using manual segmentation, which is tedious and not always accurate. Deep learning models such as the U-Net convolutional neural network have been widely used in biomedical image segmentation. Although this model performs well on the BRATS 2015 dataset by predicting a pixel-wise segmentation map of the input image in an auto-encoder fashion, it does not segment all cases correctly. We therefore improve the U-Net model by replacing the de-convolution stage with nearest-neighbor upsampling and by using elastic transformation to augment the training dataset, making the model more robust to low-grade tumors. We trained our NNRET U-Net model on the BRATS 2017 dataset and obtained better performance than the state-of-the-art classic U-Net model.
</p>
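<p style="text-align:justify">A minimal sketch of elastic-transformation augmentation in the spirit of Simard et al. (illustration only, not the exact augmentation used here): each pixel is displaced by a smoothed random field and the image is resampled with nearest-neighbor interpolation.</p>
<pre style="background:#f6f6f6;padding:10px;overflow-x:auto"><code># Python sketch: elastic deformation of a 2D image.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_transform(image, alpha=34.0, sigma=4.0, seed=0):
    rng = np.random.default_rng(seed)
    shape = image.shape  # assumes a 2D, single-channel image
    dx = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    y, x = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    coords = np.array([y + dy, x + dx])
    return map_coordinates(image, coords, order=0)  # order=0: nearest-neighbor
</code></pre>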
</td>
</tr>
<tr> </tr>
<tr> </tr>
<tr> </tr>
</tbody></table>
<h2 style="color: black;">Professional Activities</h2>
<ul>
<li style="padding-bottom:30px;color: black;">
<b>Research Intern 2020</b><br>
<a href="https://www.ibm.com/us-en/" style="color:blue" > IBM</a>, San Jose, California<br>
Interpretable AI for Deep Neural Network Optimization<br>
<!-- IEEE Winter Conference on Applications of Computer Vision (WACV) 2020 <br> -->
</li>
<li>
<b>Research Assistant</b><br>
RIT Biomedical Modeling, Visualization and Image-guided Navigation Lab <br>
Advisor: <a href="https://www.rit.edu/directory/calbme-cristian-linte" style="color:blue" > Cristian A. Linte, Ph.D.</a><br>
Quantification of clinical parameters for predicting heart attack from segmentation results obtained with our novel, memory-efficient (only 0.34 million parameters) architecture, as well as with generative models (GANs, VAEs), in both supervised and semi-supervised settings<br>
<p style="margin-top:3px"></p>
</li>
</ul>
<h2 style="color: black;">Honors & Awards</h2>
<ul>
<li> MICCAI student travel award as part of an NSF grant (2020)</li>
<li> <a href="https://ewh.ieee.org/r1/rochester/sp/WNYISPW2018.html" style="color:blue" > Best paper award</a>, Western New York Image and Signal Processing Workshop (2018)</li>
<li> Imagine Festival RIT Award from KODAK (2017)</li>
<li> RIT Graduate Scholarship (2017)</li>
<li> Awarded for achieving GPAs of 3.85∼4.0 in six of eight semesters (2012-2015)</li>
</ul>
<h2 style="color: black;">Reviewing</h2>
<ul>
<li style="padding-bottom:30px;" >
<b>Conference & Journal Reviews:</b><br>
NeurIPS 2020 <br>
MICCAI 2020<br>
IEEE Access 2019<br>
IJCARS 2020<br>
IPCAI 2020<br>
<!-- IEEE Winter Conference on Applications of Computer Vision (WACV) 2020 <br> -->
</li>
<p style="margin-top:3px"></p>
</ul>
<div id="footer">
<div id="footer-text"></div>
</div>
<p><center>
<div id="clustrmaps-widget" style="width:40%">
<script type="text/javascript" id="clustrmaps" src="//cdn.clustrmaps.com/map_v2.js?d=LJNWkxUAFjdgZdHhjWvEOF1K9cIg45om0jzghCyXpkc&cl=ffffff&w=a"></script>
</div>
<br>
© S. M. Kamrul Hasan | Last updated: Sep 2020
</center></p>
</div>
</body></html>