
🧠 PBFLB-CNN Project

Phase 1: Lightweight & Reproducible CNN Baselines



📌 Project Objective

This project develops ultra-lightweight, deployment-ready CNN architectures
for detecting curling defects in industrial PBF-LB/P (Selective Laser Sintering) processes.

The central research question of Phase‑1:

Can we design a computationally efficient CNN that remains reliable under extreme class imbalance (~0.9%) while preserving strict anti-leakage guarantees?


🏗 Phase‑1 Architecture Overview

Four progressively designed lightweight CNN models:


| Model | Purpose | Design Philosophy |
|---|---|---|
| PicoCNN | Minimal baseline | Ultra-small reference model |
| NanoCNN | Compact CNN | Standard lightweight convolution |
| NanoLightCNN | Efficiency-optimized | Reduced capacity, faster inference |
| MicroLiteCNN | Hybrid depthwise (DW) + standard conv | Best trade-off: capacity vs. speed |

All models follow identical training and evaluation protocols to ensure fair comparison.


🔒 Experimental Integrity Framework

Phase‑1 enforces a cryptographically verifiable protocol:

  • Canonical configuration (fixed_config.json)
  • SHA256 checksum validation at runtime
  • Immutable configuration mapping
  • Deterministic execution (fixed seeds, disabled cuDNN benchmarking)
  • Strict 4-fold group-aware cross-validation
  • 20% holdout test isolation
  • No temporal or segment leakage
  • SHA1 duplicate-content verification across splits
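The runtime checksum validation can be sketched as follows. This is a minimal illustration, not the repository's actual implementation; the function name and signature are assumptions:

```python
import hashlib
import json
from pathlib import Path


def load_verified_config(path, expected_sha256):
    """Load a JSON config only if its SHA256 digest matches the pinned value.

    Refusing to start when the digest differs is what prevents silent
    configuration drift between runs.
    """
    raw = Path(path).read_bytes()
    digest = hashlib.sha256(raw).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(
            f"Configuration drift detected for {path}: "
            f"expected {expected_sha256}, got {digest}"
        )
    return json.loads(raw)
```

Validating the raw bytes (rather than the parsed dictionary) also catches formatting-only edits, which keeps the audit trail strict.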

This ensures:

✔ No configuration drift
✔ No silent hyperparameter modification
✔ No cross-fold contamination
✔ Full audit readiness
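The group-aware cross-validation mentioned above can be sketched in pure Python. This is an assumed illustration of the semantics (every group's samples land in exactly one fold, so no segment leaks across folds), not the repository's code; scikit-learn's `GroupKFold` provides comparable behavior:

```python
from collections import defaultdict


def group_kfold(groups, n_splits=4):
    """Assign whole groups (e.g. build/segment ids) to folds.

    A group is never split across folds, which is what rules out
    segment leakage between training and validation data.
    """
    by_group = defaultdict(list)
    for idx, g in enumerate(groups):
        by_group[g].append(idx)
    folds = [[] for _ in range(n_splits)]
    # Greedy balancing: place largest groups first into the smallest fold.
    for _, idxs in sorted(by_group.items(), key=lambda kv: -len(kv[1])):
        min(folds, key=len).extend(idxs)
    return folds
```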


⚖ Class Imbalance Strategy

Industrial defect ratio ≈ 0.9% positives

Mitigation strategy:

  • Cost-sensitive cross-entropy (frequency-based weighting)
  • Minority-only augmentation (train split only)
  • Validation-based threshold calibration (τ* maximizing F1)
  • AUPRC reporting for imbalance-aware evaluation

No oversampling.
No distribution distortion.
No heuristic batch rebalancing.
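The weighting and calibration steps can be sketched as below. This is a minimal illustration under assumptions: the inverse-frequency formula `N / (K · n_c)` and both function names are ours, not necessarily the repository's, and the τ* search uses a simple grid rather than whatever the project actually does:

```python
import numpy as np


def inverse_frequency_weights(labels):
    """Per-class weights for cost-sensitive cross-entropy.

    Assumes every class appears at least once; rare classes get
    proportionally larger weights.
    """
    counts = np.bincount(labels)
    return len(labels) / (len(counts) * counts.astype(float))


def calibrate_threshold(probs, labels, grid=np.linspace(0.01, 0.99, 99)):
    """Grid-search the decision threshold tau* that maximizes F1
    on a validation split (never on the holdout)."""
    best_tau, best_f1 = 0.5, -1.0
    for tau in grid:
        preds = (probs >= tau).astype(int)
        tp = int(((preds == 1) & (labels == 1)).sum())
        fp = int(((preds == 1) & (labels == 0)).sum())
        fn = int(((preds == 0) & (labels == 1)).sum())
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_tau, best_f1 = tau, f1
    return best_tau, best_f1
```

Because the weights come from training-split frequencies and τ* from the validation split, neither touches the holdout distribution, which is why no oversampling or batch rebalancing is needed.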


📊 Evaluation Protocol

Per fold:

  • F1@τ*
  • AUPRC
  • Accuracy
  • Optimal threshold (τ*)

Holdout:

  • Final unbiased evaluation
  • No threshold tuning on holdout

Capacity reporting:

  • Trainable parameters
  • MACs
  • FLOPs (≈ 2 × MACs)
  • Single-thread CPU latency
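The two derived capacity metrics can be sketched as follows; function names and the benchmarking parameters are illustrative assumptions, not the repository's API:

```python
import time
import statistics


def flops_from_macs(macs):
    """One multiply plus one accumulate per MAC, hence FLOPs ≈ 2 × MACs."""
    return 2 * macs


def cpu_latency_ms(fn, warmup=5, runs=50):
    """Median single-thread CPU latency of a no-argument callable, in ms.

    Warmup iterations are discarded so one-time costs (allocation,
    JIT/cache effects) do not skew the measurement; the median is
    robust to scheduler jitter.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)
```

For a PyTorch model, `fn` would wrap a single forward pass on a fixed-size input with `torch.set_num_threads(1)` applied beforehand to enforce the single-thread condition.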


📁 Repository Structure (Phase‑1)

```
cnn_models/
    PicoCNN/
    NanoCNN/
    NanoLightCNN/
    MicroLiteCNN/
configs/
splits/
stats/
reports/audit/
scripts/preprocess/
```

Each model directory contains:

  • artifacts/
  • calib/
  • capacity/
  • configs/
  • logs/
  • meta/
  • weights/
  • model_*.py

All folds (1–4) are complete and verified.


🚀 Phase‑1 Status

✔ Architectures finalized
✔ Anti-leakage validated
✔ Cross-validation locked
✔ Holdout metrics recorded
✔ Capacity benchmarks measured
✔ Configuration cryptographically frozen

Phase‑1 is officially closed and ready for temporal modeling extension (Phase‑2: LSTM-based sequence modeling).


👤 Author

Ali Vaezi
University of Applied Sciences Vienna


About

Trustworthy AI for Real-Time Quality Monitoring in Polymer Additive Manufacturing
