The Department of Computer Engineering hosted a guest lecture on Explainable AI and Content Security in Healthcare Applications and Beyond by Dr. Zulfiqar Ali, Assistant Professor in AI for Decision Making at the School of Computer Science and Electronic Engineering, University of Essex, United Kingdom, at the Dong Yang 3 Meeting Room, Faculty of Engineering, Prince of Songkla University, on 18 April 2024, 10:00-11:30.
Abstract
Explainable AI (XAI) refers to the development of AI systems whose actions and decisions can be easily understood by humans. In the context of health applications and beyond, explainability is crucial for ensuring transparency, accountability, and trust in AI systems, especially when they are involved in making decisions that can impact human lives.
In health applications, such as medical diagnosis or treatment recommendation systems, explainable AI can help healthcare professionals understand why a certain diagnosis or treatment suggestion was made by the AI system. This understanding is essential for doctors to trust and rely on AI-driven insights and recommendations in their decision-making process. It also allows them to verify the validity of AI-generated conclusions and potentially identify errors or biases in the system.
Content security is another critical aspect, particularly in health applications where sensitive patient data is involved.
In this talk, both explainable AI and content security will be discussed in the context of vocal fold disorder assessment (https://doi.org/10.1109/ACCESS.2017.2680467, https://doi.org/10.1016/j.jvoice.2015.08.010), zero-watermarking for medical signals (audio and image) (https://doi.org/10.1016/j.future.2019.01.050, https://doi.org/10.3390/electronics11050710), and imposter detection in forged audio (https://doi.org/10.1016/j.compeleceng.2021.107122).
See further details at https://sites.google.com/coe.psu.ac.th/dr-zulfiqar-alis-seminar/
Watch the recording on Facebook Live: https://fb.watch/rxUo80luHJ/?mibextid=rS40aB7S9Ucbxw6v