000 05193nam a22003017a 4500
001 th640
003 ISI Library, Kolkata
005 20250828125623.0
008 250828b |||||||| |||| 00| 0 eng d
040 _aISI Library
_bEnglish
082 0 4 _223rd
_a616.0754
_bM281
100 1 _aManna, Siladitta
_eauthor
245 1 0 _aSelf-supervised learning and its applications in medical image analysis/
_cSiladitta Manna
260 _aKolkata:
_bIndian Statistical Institute,
_c2025
300 _a217 pages
502 _aThesis (Ph.D.)--Indian Statistical Institute, 2025
504 _aIncludes bibliographical references
505 0 _aIntroduction -- Literature Survey -- Context-based self-supervised learning for medical image analysis -- Self-supervised contrastive pre-training on medical images -- Self-supervised learning by optimizing mutual information -- Dynamic temperature hyper-parameter scaling in self-supervised contrastive learning -- Self-supervised learning for medical image segmentation using prototype aggregation -- Conclusion and future directions
508 _aGuided by Prof. Umapada Pal
520 _aSelf-supervised learning (SSL) enables learning robust representations from unlabeled data and consists of two stages: a pretext task and a downstream task. The representations learnt in the pretext task are transferred to the downstream task. Self-supervised learning has applications in various domains, such as computer vision, natural language processing, and speech and audio processing. In transfer learning scenarios, differences between the source and target data distributions destroy the hierarchical co-adaptation of the representations, and hence proper fine-tuning is required to achieve satisfactory performance. With self-supervised pre-training, it is possible to learn representations aligned with the target data distribution, making it easier to fine-tune the parameters in the downstream task in the data-scarce medical image analysis domain. The primary objective of this thesis is to propose self-supervised learning frameworks that address specific challenges. Initially, jigsaw puzzle-solving frameworks are devised in which a semi-parallel architecture decouples the representations of patches of a slice from a magnetic resonance scan, preventing the learning of low-level signals and encouraging context-invariant representations. The literature shows that contrastive learning tasks learn better representations than context-based tasks. Thus, we propose a novel binary contrastive learning framework based on classifying a pair as positive or negative. We also investigate the ability of self-supervised pre-training to boost the quality of transferable representations. To effectively control the uniformity-alignment trade-off, we re-formulate the binary contrastive framework from a variational perspective. We further improve this vanilla formulation by eliminating positive-positive repulsion and amplifying negative-negative repulsion. The reformulated binary contrastive learning framework outperforms state-of-the-art contrastive and non-contrastive frameworks on benchmark datasets. Empirically, we observe that the temperature hyper-parameter plays a significant role in controlling the uniformity-alignment trade-off and consequently determines downstream performance. Hence, we derive a form of the temperature function by solving a first-order differential equation obtained from the gradient of the InfoNCE loss with respect to the cosine similarity of a negative pair. This enables controlling the uniformity-alignment trade-off by computing an optimal temperature for each sample pair. Experimental evidence shows that the proposed temperature function improves a weak baseline framework to the point of outperforming state-of-the-art contrastive and non-contrastive frameworks. Finally, to maximise the transferability of representations, we propose a self-supervised few-shot segmentation pretext task that minimises the disparity between the pretext and downstream tasks. Using a Felzenszwalb-based segmentation method to generate pseudo-masks, we train a segmentation network that learns representations aligned with the downstream task of one-shot segmentation. We propose a correlation-weighted prototype aggregation step to incorporate contextual information efficiently. In the downstream task, we conduct inference without fine-tuning, and the proposed self-supervised one-shot framework performs better than or on par with contemporary self-supervised segmentation frameworks. In conclusion, the proposed self-supervised learning frameworks offer significant improvements in representation learning and enhance performance on downstream medical image analysis tasks, as observed from the experimental results of the thesis.
650 4 _aSelf-Supervised Learning
650 4 _aMedical Image Analysis
650 4 _aContrastive Learning
650 4 _aJigsaw Puzzle Solving
650 4 _aOne-Shot Segmentation
856 _uhttps://dspace.isical.ac.in/jspui/handle/10263/7554
_yFull text
942 _2ddc
_cTH
999 _c437227
_d437227