I'm a PhD candidate in Brain-Inspired Adaptive AI in the Data & AI Cluster at TU Eindhoven, Netherlands. My research focuses on bridging the workings of the brain with neural networks - leveraging insights from cognitive biases, multi-memory systems, and sparse coding to design effective and efficient lifelong learning models. I have worked across vision and language modalities, using semantic priors and LLMs to improve transfer and out-of-distribution robustness. I am currently exploring multi-modality and world models to couple perception with predictive modeling and long-horizon adaptation.
With 10+ years in industry as an AI Research Engineer and Software Engineer, I have built machine learning and embedded software solutions for Autonomous Driving, Quality Inspection, and Healthcare. Combining industry experience with academic curiosity, I'm passionate about developing intelligent systems that are both computationally efficient and grounded in cognitive principles.
Research Interests: Brain-inspired AI; Cognitive Biases; Generalization - Continual/Lifelong Learning and Robustness in DNNs; Foundation and Multi-Modal Networks
More details in my CV
Europe-wide Centre of Excellence advancing the pillars of Adaptive, Green, Human-Centric, and Trustworthy AI, with research and implementation focused on lifelong learning, robustness, and fairness. ENFIELD connects top research labs and industry to deliver foundational advances and high-impact use cases across healthcare, energy, manufacturing, and space—coupling efficient (green) AI with reliability and governance so systems can adapt safely in dynamic real-world environments.
Designed safety patterns and built novel XAI components for critical embedded systems in autonomous driving. Our automotive case targets end-to-end detection of road users and traffic scenes, scene segmentation, and trajectory prediction of road users (including vulnerable road users), with explainability and safety built in, enabling timely collision-avoidance decisions in complex traffic. The work feeds a cross-sector programme (automotive/space/railway) and aligns with functional-safety requirements to accelerate trustworthy deployment.
Designed and engineered a real-time perception framework for multi-camera input fusion to detect and track road users, recognize traffic signs, and parse lanes/road layout for Autonomous Driving Systems (ADS). The suite combines state-of-the-art object detection and semantic segmentation across 2D images/video and 3D point clouds, with synthetic/generative data augmentation to strengthen coverage of rare events. The end-to-end pipeline spans data sourcing and curation, model design/optimization, and embedded deployment.
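For flavour, here is a minimal sketch of the per-camera detection loop at the heart of such a framework. It uses an off-the-shelf torchvision detector as a stand-in for the deployed road-user models; camera indices, the score threshold, and the fusion step are illustrative, not the production setup.

```python
import cv2
import torch
import torchvision

# Pre-trained detector as a stand-in for the deployed road-user/traffic-sign models.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(frame_bgr, score_thresh=0.5):
    """Run the detector on one BGR frame and return (box, score, label) tuples."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([tensor])[0]
    keep = out["scores"] >= score_thresh
    return list(zip(out["boxes"][keep].tolist(),
                    out["scores"][keep].tolist(),
                    out["labels"][keep].tolist()))

# Open several (roughly synchronized) camera streams; indices are placeholders.
cams = {name: cv2.VideoCapture(idx)
        for name, idx in [("front", 0), ("left", 1), ("right", 2)]}

fused = []
for name, cap in cams.items():
    ok, frame = cap.read()
    if ok:
        # Tag each detection with its source camera for downstream fusion/tracking.
        fused.extend((name, det) for det in detect(frame))
```

In the real system this per-camera stage feeds trackers and a fusion layer; the sketch only shows how detections from multiple streams are collected and tagged by source.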
Implemented an on-device perception module that detects and irreversibly blurs personal data—faces, full bodies, and license plates—in real time across video streams and stored media, enabling GDPR-compliant collection, processing, and sharing. The system follows privacy-by-design: configurable blur policies per jurisdiction and QA hooks (mask coverage, false-negative audits) for verifiable compliance.
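A simplified sketch of the core anonymization step follows. It uses an off-the-shelf OpenCV Haar-cascade face detector purely as a stand-in for the dedicated face/body/plate models, and the input path and blur kernel are illustrative.

```python
import cv2

# Readily available face detector shipped with opencv-python (stand-in only).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_regions(frame, boxes, kernel=(51, 51)):
    """Irreversibly blur each (x, y, w, h) region in place."""
    for (x, y, w, h) in boxes:
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, kernel, 0)
    return frame

cap = cv2.VideoCapture("dashcam.mp4")  # placeholder input path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    frame = blur_regions(frame, faces)
    # ... write the anonymized frame to storage / forward for GDPR-compliant sharing
```

The QA hooks mentioned above sit around this step: mask-coverage checks and false-negative audits verify that every detected region was actually blurred before data leaves the device.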
Built algorithms to analyze geo-tagged fleet video and automatically surface infrastructure changes—e.g., new/removed traffic signs, lane geometry shifts, road works—prioritizing regions with high change velocity for HD map updates. The pipeline combines event-based scenario identification (cut-ins/outs, lane changes, lead-vehicle deceleration) with spatial–temporal clustering and confidence scoring, automating HD map maintenance for autonomous vehicle navigation.
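To illustrate the clustering idea, here is a toy sketch that groups geo-tagged change detections with DBSCAN and uses cluster size as a simple confidence proxy. The coordinates, eps, and weighting are made up for the example; the production pipeline also clusters over time and scenario type.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Each row: (latitude, longitude) of a detected change (e.g. a new traffic sign).
detections = np.array([
    [51.4416, 5.4697], [51.4417, 5.4698], [51.4415, 5.4696],  # dense cluster
    [51.4500, 5.4800],                                         # isolated hit
])

# ~2e-4 degrees is roughly 20 m at this latitude; lone hits are treated as noise.
labels = DBSCAN(eps=2e-4, min_samples=2).fit(detections).labels_

for cluster_id in set(labels) - {-1}:
    members = detections[labels == cluster_id]
    confidence = len(members)  # more independent observations -> higher update priority
    print(f"cluster {cluster_id}: centroid={members.mean(axis=0)}, confidence={confidence}")
```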
Pioneered the integration of deep learning into NI's inspection stack, pairing defect-detection models with an inference pipeline optimized for NI real-time/edge hardware. The solution fuses DL with classical CV (pre/post-processing, ROI proposals, rule checks) to boost accuracy while preserving determinism. To meet strict latency/throughput targets on CPU and embedded targets, optimized kernels and pipelines using Intel IPP, Intel DAAL/oneDAL, OpenVINO, Intel TBB for multithreading, and NVIDIA TensorRT where applicable.
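As a rough illustration of the DL-plus-classical-CV pattern, the sketch below uses OpenCV for ROI proposals and OpenVINO for inference. The model path, input size, and thresholds are placeholders rather than the actual NI pipeline.

```python
import cv2
import numpy as np
from openvino.runtime import Core

# Compile a (placeholder) defect-classifier IR for CPU execution.
core = Core()
compiled = core.compile_model(core.read_model("defect_classifier.xml"), "CPU")
output_layer = compiled.output(0)

def propose_rois(gray):
    """Classical pre-processing: Otsu threshold + contours as cheap defect candidates."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]

image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
for (x, y, w, h) in propose_rois(image):
    # Assumes the model expects a 1x1x64x64 float input; adjust to the real IR.
    patch = cv2.resize(image[y:y + h, x:x + w], (64, 64)).astype(np.float32) / 255.0
    scores = compiled([patch[np.newaxis, np.newaxis, :, :]])[output_layer]
    # A rule check on top of the DL score keeps the decision deterministic and auditable.
    if scores.max() > 0.9:
        print(f"defect candidate at ({x},{y},{w},{h}) score={scores.max():.2f}")
```

Keeping the ROI proposal and final rule check in classical CV is what preserves determinism: the DL model only scores candidates, it does not decide alone.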
Led development of a production-grade machine-vision library delivering reliable inspection on NI embedded targets and Windows. Built core algorithms for industrial use cases: geometric/template pattern matching, robust OCR, high-accuracy 1D/2D barcode reading, edge/contour-based metrology, and blob and texture analysis to catch fine surface defects (metals, fabrics, PCBs) and identify misprints from low-quality images. Delivered as configurable LabVIEW VIs and callable APIs (C/C++/C#).
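A tiny example of the template-matching primitive that anchors several of these tools (image and template paths, and the acceptance threshold, are placeholders):

```python
import cv2

scene = cv2.imread("pcb_image.png", cv2.IMREAD_GRAYSCALE)        # placeholder scene
template = cv2.imread("fiducial_template.png", cv2.IMREAD_GRAYSCALE)  # placeholder template

# Normalized cross-correlation is robust to uniform lighting changes.
response = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(response)

if max_val > 0.8:  # illustrative acceptance threshold
    h, w = template.shape
    print(f"match at {max_loc} (score {max_val:.2f}), bbox {w}x{h}")
```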
Watch more of my dance performances at various events in the Netherlands on my YouTube channel.
Mapping the world, one journey at a time.