Note: This article is just one of 60+ sections from our full report titled: The 2024 Legal AI Retrospective - Key Lessons from the Past Year. Please download the full report to check any citations.
Evaluating XAI Methods
Explanation: We need to establish standards and benchmarks for legal AI explanations so that systems can be evaluated and improved consistently.
Challenge:
• Lack of standardized evaluation methods
• Difficulties in conducting human-subject evaluations
Research Direction:
• Develop comprehensive evaluation metrics
• Implement standardized protocols for human-subject evaluations
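To make the "comprehensive evaluation metrics" direction concrete, one widely discussed family of XAI metrics measures *faithfulness*: whether the features an explanation ranks as important actually drive the model's output. Below is a minimal, illustrative sketch of a perturbation-based faithfulness check. The toy model, data, and function names (`predict`, `faithfulness_drop`) are assumptions for illustration, not part of any report methodology.

```python
# Illustrative sketch (assumed names and toy data): a perturbation-based
# faithfulness metric for feature attributions. The idea: zero out the
# features an explanation ranks as most important, and measure how much
# the model's score drops. A faithful attribution should yield a large drop.

def predict(weights, features):
    """Toy linear scoring model standing in for a legal-AI classifier."""
    return sum(w * f for w, f in zip(weights, features))

def faithfulness_drop(weights, features, attributions, k):
    """Score drop after masking the k features ranked most important
    by the attribution method."""
    ranked = sorted(range(len(features)), key=lambda i: -abs(attributions[i]))
    masked = list(features)
    for i in ranked[:k]:
        masked[i] = 0.0  # perturbation: remove the feature's contribution
    return predict(weights, features) - predict(weights, masked)

weights = [0.9, 0.1, 0.05, 0.8]
features = [1.0, 1.0, 1.0, 1.0]
# For a linear model, gradient * input is an exactly faithful attribution.
attributions = [w * f for w, f in zip(weights, features)]

top_drop = faithfulness_drop(weights, features, attributions, k=2)
print(top_drop)  # masking the two top-attributed features drops the score by 1.7
```

A standardized benchmark would run a check like this (alongside other metrics such as stability and plausibility) across many models and explanation methods, which is precisely what the lack of agreed protocols currently makes hard to compare.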

Alex Denne, Head of Growth (Open Source Law) at Ƶ, is a legal tech leader and serial founder with over a decade of experience driving innovation and making legal services more accessible. Since joining in 2021, he has scaled the platform from 200 to over 120,000 users, combining deep contract law expertise with a data-driven, open-source approach. He is passionate about democratizing legal knowledge through AI, backed by strong academic credentials and experience leading major product and innovation initiatives.