DenseSegNet : Densely Separable Network for Cell Nuclei Segmentation With Multi‐Scale Feature Fusion and Adaptive Attention Mechanisms
Abstract
Accurate cell nuclei segmentation is essential for automated diagnostics and biomedical research, yet it remains challenging because of variation in cell size, complicated backgrounds, and overlapping nuclei. We introduce DenseSegNet, a deep learning (DL) framework designed to address these problems by improving segmentation accuracy and robustness. DenseSegNet strengthens feature extraction and multi-scale fusion through dense separable blocks, a Spatial-Channel Attention Residual Block (SCARB), and a Bidirectional Feature Pyramid Network (BiFPN). The network begins with a convolutional encoder that extracts hierarchical features using dense separable blocks. The SCARB module refines spatial and channel-wise feature representations by amplifying relevant information and suppressing noise. The BiFPN then performs bidirectional multi-scale feature refinement, ensuring efficient semantic representation across feature levels. In the decoder stage, the extracted features are concatenated and upsampled to produce high-resolution segmentation maps. DenseSegNet is evaluated on a benchmark, the 2018 Data Science Bowl (DSB) dataset. We investigated several optimizers and loss functions to fully assess the architecture's strength and adaptability. With a Dice coefficient of 93.8%, an Intersection over Union (IoU) of 88.2%, a precision of 96.7%, and a recall of 91.8%, the results highlight the strong segmentation ability of DenseSegNet.
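For reference, the four evaluation metrics reported above can be computed from binary segmentation masks as follows. This is a minimal illustrative sketch, not the paper's evaluation code; it assumes hard (thresholded) binary masks and uses a small epsilon to guard against empty masks.

```python
import numpy as np

def segmentation_metrics(pred, target, eps=1e-7):
    """Dice, IoU, precision, and recall for binary segmentation masks.

    pred, target: 0/1 arrays of the same shape.
    eps guards against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # true positive pixels
    fp = np.logical_and(pred, ~target).sum()  # false positive pixels
    fn = np.logical_and(~pred, target).sum()  # false negative pixels
    dice = (2 * tp + eps) / (2 * tp + fp + fn + eps)
    iou = (tp + eps) / (tp + fp + fn + eps)
    precision = (tp + eps) / (tp + fp + eps)
    recall = (tp + eps) / (tp + fn + eps)
    return dice, iou, precision, recall

# Toy example: prediction agrees with the ground truth on 2 of 3 foreground pixels.
pred = np.array([[1, 1], [1, 0]])
target = np.array([[1, 1], [0, 1]])
dice, iou, precision, recall = segmentation_metrics(pred, target)
# dice = 2/3, iou = 1/2, precision = 2/3, recall = 2/3
```

Note that Dice is always at least as large as IoU (Dice = 2·IoU / (1 + IoU)), consistent with the reported 93.8% Dice versus 88.2% IoU.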