<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Evidential Deep Learning on Daniel Bethell</title><link>https://daniel-bethell.co.uk/tags/evidential-deep-learning/</link><description>Recent content in Evidential Deep Learning on Daniel Bethell</description><generator>Hugo -- gohugo.io</generator><language>en</language><copyright>© 2026 Daniel Bethell</copyright><lastBuildDate>Sun, 29 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://daniel-bethell.co.uk/tags/evidential-deep-learning/index.xml" rel="self" type="application/rss+xml"/><item><title>Robust Adversarial Quantification via Conflict-Aware Evidential Deep Learning</title><link>https://daniel-bethell.co.uk/posts/cedl/</link><pubDate>Sun, 29 Mar 2026 00:00:00 +0000</pubDate><guid>https://daniel-bethell.co.uk/posts/cedl/</guid><description>Quantifying uncertainty to reject out-of-distribution or adversarially attacked inputs is crucial for deploying deep learning models in the real world. Our proposed method, C-EDL, boosts robustness in Evidential Deep Learning by detecting conflict across input transformations, improving OOD and adversarial detection without retraining, while maintaining high accuracy and low overhead.</description></item></channel></rss>