👋 Hello

I'm Daniel Bethell

I'm a Research Associate in AI at the University of York, working on uncertainty quantification, safe reinforcement learning, and trustworthy machine learning.

Recent

Robust Adversarial Quantification via Conflict-Aware Evidential Deep Learning
·2126 words·10 mins·
Uncertainty · Evidential Deep Learning · Papers Explained
Learning to Navigate Under Imperfect Perception: Conformalised Segmentation for Safe Reinforcement Learning
·2503 words·12 mins·
Uncertainty · Conformal Prediction · Safe Reinforcement Learning · Papers Explained
Do Some Research Areas Get an Easier Accept? The Quiet Biases Hiding in ICLR's Peer Review
·2684 words·13 mins·
Discussion
Safe Reinforcement Learning in Black-Box Environments via Adaptive Shielding
·1969 words·10 mins·
Safe Reinforcement Learning · Shielding · Papers Explained
A Comprehensive Guide to Conformal Prediction: Simplifying the Math, and Code
·14601 words·69 mins·
Uncertainty · Conformal Prediction
Demystifying Kolmogorov-Arnold Networks: A Beginner-Friendly Guide with Code
·2347 words·12 mins·
Kolmogorov-Arnold Networks