Overview

The radiology processing pipeline in HoneyBee handles various medical imaging modalities, including Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). Each modality presents unique characteristics that require specialized preprocessing techniques.

Radiology Processing Pipeline

Key Features

  • Support for DICOM and NIfTI formats
  • Metadata preservation (acquisition parameters, patient information, etc.)
  • Anatomical segmentation and region-of-interest (ROI) analysis
  • Denoising and artifact reduction
  • Spatial standardization and resampling
  • Intensity normalization
  • Embedding generation using specialized models

Radiological Data Management

HoneyBee supports standard medical imaging formats with metadata preservation:

from honeybee.processors import RadiologyProcessor

# Initialize the radiology processor
processor = RadiologyProcessor()

# Load DICOM series
dicom_series = processor.load_dicom("path/to/dicom_folder/")

# Or load NIfTI file
nifti_image = processor.load_nifti("path/to/image.nii.gz")

# Access metadata
metadata = dicom_series.metadata
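
The exact structure of the metadata object is HoneyBee-specific, but the fields originate from standard DICOM tags. For reference, a few common acquisition parameters can be read directly with pydicom (the slice file name below is illustrative):

import pydicom

# Read one slice of the series; these tag names are standard DICOM attributes
ds = pydicom.dcmread("path/to/dicom_folder/slice_0001.dcm")
print(ds.Modality)        # e.g. "CT"
print(ds.PixelSpacing)    # in-plane resolution in mm
print(ds.SliceThickness)  # slice thickness in mm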

Anatomical Segmentation and Masking

Isolate relevant anatomical structures for targeted analysis:

from honeybee.processors import RadiologyProcessor

processor = RadiologyProcessor()
ct_scan = processor.load_dicom("path/to/ct_scan/")

# Lung segmentation for CT
lung_mask = processor.segment_lungs(ct_scan)

# Multi-organ segmentation
organs = processor.segment_organs(ct_scan)
liver_mask = organs['liver']
spleen_mask = organs['spleen']

# Tumor segmentation in MRI
mri_scan = processor.load_dicom("path/to/mri_scan/")
tumor_mask = processor.segment_tumor(mri_scan)

# PET metabolic volume delineation
pet_scan = processor.load_dicom("path/to/pet_scan/")
suv_threshold = 2.5  # Standardized Uptake Value threshold
metabolic_mask = processor.segment_metabolic_volume(pet_scan, threshold=suv_threshold)
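
Fixed-threshold delineation of this kind typically reduces to a voxelwise SUV cut-off followed by connected-component cleanup. A minimal NumPy/SciPy sketch of the idea (an illustration, not the HoneyBee internals):

import numpy as np
from scipy import ndimage

def threshold_metabolic_volume(suv_volume, threshold=2.5):
    """Keep voxels above the SUV threshold, retaining only the largest lesion."""
    mask = suv_volume > threshold
    labels, num = ndimage.label(mask)  # group voxels into connected components
    if num == 0:
        return mask  # nothing above threshold
    sizes = ndimage.sum(mask, labels, index=range(1, num + 1))
    return labels == (int(np.argmax(sizes)) + 1)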

Denoising and Artifact Reduction

Improve image quality through denoising and artifact reduction:

from honeybee.processors import RadiologyProcessor

processor = RadiologyProcessor()
ct_scan = processor.load_dicom("path/to/ct_scan/")

# Non-local means denoising for CT
denoised_ct = processor.denoise(ct_scan, method="nlm")

# Deep learning-based denoising
denoised_ct_dl = processor.denoise(ct_scan, method="deep")

# Metal artifact reduction
mar_ct = processor.reduce_metal_artifacts(ct_scan)

# MRI-specific denoising
mri_scan = processor.load_dicom("path/to/mri_scan/")
denoised_mri = processor.denoise(mri_scan, method="rician")  # Rician noise model for MRI

# PET denoising
pet_scan = processor.load_dicom("path/to/pet_scan/")
denoised_pet = processor.denoise(pet_scan, method="pet_specific")
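
For reference, non-local means denoising of a single slice looks roughly like the following scikit-image sketch (parameter values are illustrative, not HoneyBee defaults):

import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

ct_slice = np.random.rand(128, 128).astype(np.float32)  # stand-in for a real axial slice
sigma = float(np.mean(estimate_sigma(ct_slice)))  # estimate the noise level
denoised = denoise_nl_means(
    ct_slice,
    h=1.15 * sigma,    # filtering strength tied to the noise estimate
    patch_size=5,      # size of the patches being compared
    patch_distance=6,  # radius of the search window
    fast_mode=True,
)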

Spatial Standardization and Resampling

Standardize spatial resolution and orientation:

from honeybee.processors import RadiologyProcessor

processor = RadiologyProcessor()
ct_scan = processor.load_dicom("path/to/ct_scan/")

# Isotropic resampling
resampled_ct = processor.resample(ct_scan, spacing=(1.0, 1.0, 1.0))  # 1mm isotropic

# Reorient to standard orientation (RAS: Right-Anterior-Superior)
standardized_ct = processor.reorient(resampled_ct, orientation="RAS")

# Registration to atlas
atlas = processor.load_atlas("path/to/atlas.nii.gz")
registered_ct = processor.register(standardized_ct, atlas)

# Crop to a region of interest (e.g., the lungs)
roi_mask = processor.segment_lungs(registered_ct)
cropped_ct = processor.crop_to_roi(registered_ct, roi_mask)
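
Isotropic resampling is commonly built on SimpleITK. A minimal sketch of the spacing arithmetic, independent of the HoneyBee API: the new grid size is chosen so the physical extent of the image is preserved.

import SimpleITK as sitk

def resample_isotropic(image, spacing=(1.0, 1.0, 1.0)):
    """Resample to the given voxel spacing while preserving physical extent."""
    orig_spacing = image.GetSpacing()
    orig_size = image.GetSize()
    new_size = [
        int(round(sz * osp / nsp))
        for sz, osp, nsp in zip(orig_size, orig_spacing, spacing)
    ]
    return sitk.Resample(
        image, new_size, sitk.Transform(), sitk.sitkLinear,
        image.GetOrigin(), spacing, image.GetDirection(),
        0.0, image.GetPixelID(),
    )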

Intensity Normalization

Standardize signal intensities across different scanners and protocols:

from honeybee.processors import RadiologyProcessor

processor = RadiologyProcessor()

# CT Hounsfield unit verification
ct_scan = processor.load_dicom("path/to/ct_scan/")
verified_ct = processor.verify_hounsfield_units(ct_scan)

# CT window/level adjustment
window_ct = processor.apply_window(verified_ct, window=400, level=50)  # Soft tissue window

# MRI intensity normalization
mri_scan = processor.load_dicom("path/to/mri_scan/")
normalized_mri = processor.normalize_intensity(mri_scan, method="z_score")

# Bias field correction for MRI
bias_corrected_mri = processor.correct_bias_field(mri_scan)

# PET SUV calculation
pet_scan = processor.load_dicom("path/to/pet_scan/")
suv_pet = processor.calculate_suv(
    pet_scan,
    patient_weight=70,  # in kg
    injected_dose=10,   # in mCi
    injection_time="20220101T120000"  # ISO 8601 basic format
)
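
Window/level is simple arithmetic: the window is the width of the displayed HU range and the level is its centre, so window=400, level=50 maps [-150, 250] HU onto the display range. A minimal NumPy sketch:

import numpy as np

def apply_window_np(hu_volume, window=400, level=50):
    """Clip HU values to [level - window/2, level + window/2] and rescale to [0, 1]."""
    lo, hi = level - window / 2, level + window / 2
    return (np.clip(hu_volume, lo, hi) - lo) / (hi - lo)

Similarly, body-weight SUV divides tissue activity concentration by the decay-corrected injected dose per gram of body weight. A sketch of the standard formula, assuming an F-18 tracer (not necessarily HoneyBee's exact implementation):

F18_HALF_LIFE_MIN = 109.77  # fluorine-18 half-life in minutes

def suv_bw(activity_kbq_per_ml, injected_dose_mbq, weight_kg, minutes_since_injection):
    """Body-weight SUV = tissue activity / (decay-corrected dose / body weight)."""
    decay_factor = 0.5 ** (minutes_since_injection / F18_HALF_LIFE_MIN)
    dose_at_scan_kbq = injected_dose_mbq * 1000.0 * decay_factor  # MBq -> kBq
    weight_g = weight_kg * 1000.0  # 1 g of tissue is roughly 1 mL
    return activity_kbq_per_ml / (dose_at_scan_kbq / weight_g)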

Embedding Generation

Generate embeddings from medical images using specialized models:

from honeybee.processors import RadiologyProcessor

# Initialize processor with specific model
processor = RadiologyProcessor(model="remedis")  # Options: remedis, radimagenet

# Load and preprocess image
ct_scan = processor.load_dicom("path/to/ct_scan/")
preprocessed_ct = processor.preprocess(ct_scan)

# Generate embeddings
embeddings = processor.generate_embeddings(preprocessed_ct)

# Shape: (1, embedding_dim); embedding_dim depends on the model

# For 3D volumes, generate one embedding per slice
ct_volume = processor.load_dicom("path/to/ct_volume/")
preprocessed_volume = processor.preprocess(ct_volume)
volume_embeddings = processor.generate_embeddings(preprocessed_volume, mode="3d")

# Shape: (num_slices, embedding_dim)

# Aggregate slice embeddings to volume-level
volume_embedding = processor.aggregate_embeddings(volume_embeddings, method="mean")
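
Mean aggregation is just an average over the slice axis, and the resulting vectors can be compared directly, for example with cosine similarity:

import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Mean aggregation collapses (num_slices, embedding_dim) -> (embedding_dim,)
vec_a = np.random.rand(40, 512).mean(axis=0)  # stand-in for one volume embedding
vec_b = np.random.rand(40, 512).mean(axis=0)  # stand-in for another
print(cosine_similarity(vec_a, vec_b))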

Complete Example

Full pipeline from image loading to embedding generation:

from honeybee.processors import RadiologyProcessor

# Initialize processor with specific model
processor = RadiologyProcessor(model="remedis")

# Load CT scan
ct_scan = processor.load_dicom("path/to/ct_scan/")

# Preprocess
preprocessed_ct = processor.preprocess(
    ct_scan,
    denoise=True,
    correct_artifacts=True,
    resample_spacing=(1.0, 1.0, 1.0),
    normalize=True
)

# Segment lungs (if chest CT)
lung_mask = processor.segment_lungs(preprocessed_ct)

# Apply mask
masked_ct = processor.apply_mask(preprocessed_ct, lung_mask)

# Generate embeddings
embeddings = processor.generate_embeddings(masked_ct)

# Use for downstream tasks
# ...
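
As one hypothetical downstream task, the embeddings can feed a standard classifier. A self-contained scikit-learn sketch with synthetic stand-ins for the embedding matrix and outcome labels:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 512))   # stand-in for stacked patient embeddings
y = rng.integers(0, 2, size=100)  # hypothetical binary outcome labels

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"AUROC: {scores.mean():.3f} +/- {scores.std():.3f}")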

Performance Considerations

When processing large volumetric data, consider the following (a minimal chunked-loading sketch follows the list):

  • Use memory-efficient loading strategies through lazy evaluation
  • Process volumes slice by slice when memory is limited
  • Downsample high-resolution volumes for initial analysis
  • Leverage GPU acceleration for computationally intensive operations
  • Cache intermediate results for repeated processing
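
For example, memory-mapped loading lets a large volume be streamed in chunks instead of held in RAM. A minimal sketch using NumPy's mmap support (the .npy path and process() call are illustrative):

import numpy as np

def iter_chunks(volume_path, chunk=16):
    """Yield blocks of slices from a volume on disk without loading it all."""
    vol = np.load(volume_path, mmap_mode="r")  # shape: (num_slices, H, W)
    for start in range(0, vol.shape[0], chunk):
        yield np.asarray(vol[start:start + chunk])  # materialize one chunk at a time

# for block in iter_chunks("ct_volume.npy"):
#     process(block)  # placeholder for any per-chunk step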
