Survey Ideas

 

 

Illustrative example of a survey from my former PhD student, Prof. Faraz Hussain: https://arxiv.org/pdf/1908.00080.pdf

 

Survey Template: The survey should start with an introduction that motivates the technical area, followed by sections on the technical rationale for the survey and on its scope. Then group the papers you have studied into semantically similar topics, and discuss each group so that the key results of every paper are clear from your summary. See the detailed template here: https://www.overleaf.com/read/bwyyrtgjfvgx#9cd366

 

Below are a few survey topics, each with papers to seed your survey work.

 

1.     Survey on Explainable AI for Satellite Data and Associated Models

 

Guo W, Zhang Z, Yu W, Sun X. Perspective on explainable SAR target recognition. Journal of Radars (雷达学报). 2020 Jun 1;9(3):462-76. (OPTIONAL; the paper is in Chinese, but Google Translate works well on the document.)

 

Datcu M, Huang Z, Anghel A, Zhao J, Cacoveanu R. Explainable, Physics-Aware, Trustworthy Artificial Intelligence: A paradigm shift for synthetic aperture radar. IEEE Geoscience and Remote Sensing Magazine. 2023 Feb 3;11(1):8-25.

 

Huang Z, Liu Y, Yao X, Ren J, Han J. Uncertainty Exploration: Toward Explainable SAR Target Detection. IEEE Transactions on Geoscience and Remote Sensing. 2023 Feb 22;61:1-4.

 

Andresini G, Appice A, Malerba D. SILVIA: An eXplainable Framework to Map Bark Beetle Infestation in Sentinel-2 Images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2023 Sep 6.

 

Gevaert CM. Explainable AI for earth observation: A review including societal and regulatory perspectives. International Journal of Applied Earth Observation and Geoinformation. 2022 Aug 1;112:102869.

 

Su S, Cui Z, Guo W, Zhang Z, Yu W. Explainable analysis of deep learning methods for SAR image classification. In IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium 2022 Jul 17 (pp. 2570-2573). IEEE.

 

Huang Z, Datcu M, Pan Z, Lei B. A hybrid and explainable deep learning framework for SAR images. In IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium 2020 Sep 26 (pp. 1727-1730). IEEE.

 

Huang Z, Yao X, Liu Y, Dumitru CO, Datcu M, Han J. Physically explainable CNN for SAR image classification. ISPRS Journal of Photogrammetry and Remote Sensing. 2022 Aug 1;190:25-37.

 

Richard G, Le Caillec JM, Habonneau J, Gueriot D. A Deep SAS ATR explainability framework assessment. In OCEANS 2021: San Diego–Porto 2021 Sep 20 (pp. 1-5). IEEE.

 

 

 

2.     Survey on Explainable AI for Electromagnetic Radiation Data and Associated Models

 

Grimes T, Church E, Pitts W, Wood L. Explanation of Unintended Radiated Emission Classification via LIME. arXiv preprint arXiv:2009.02418. 2020 Sep 4.

Vann JM, Karnowski T, Anderson AL. Classification of unintended radiated emissions in a multi-device environment. IEEE Transactions on Smart Grid. 2018 Dec 7;10(5):5506-13.

 

Vann JM, Karnowski TP, Kerekes R, Cooke CD, Anderson AL. A dimensionally aligned signal projection for classification of unintended radiated emissions. IEEE Transactions on Electromagnetic Compatibility. 2017 Apr 24;60(1):122-31.

 

Qu L, Yang J, Liu H, Huang K, Hu P, Li X, Zhang Y. Specific Emitter Identification Based on Two Residual Networks. In 2021 IEEE 4th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC) 2021 Jun 18 (Vol. 4, pp. 1863-1869). IEEE.

 

Dash P, Naik K. A very deep one dimensional convolutional neural network (VDOCNN) for appliance power signature classification. In 2018 IEEE Electrical Power and Energy Conference (EPEC) 2018 Oct 10 (pp. 1-6). IEEE.

 

Sheets G, Bingham P, Adams MB, Bolme D, Stewart SL. Preprocessing for Unintended Conducted Emissions Classification with ResNet. Applied Sciences. 2021 Sep 22;11(19):8808.

 

Alexander FJ, Borders T, Sheffield A, Wonders M. Workshop Report for Next-Gen AI for Proliferation Detection: Accelerating the Development and Use of Explainability Methods to Design AI Systems Suitable for Nonproliferation Mission Applications (No. BNL-221083-2021-FORE). Brookhaven National Lab. (BNL), Upton, NY (United States); Idaho National Lab. (INL), Idaho Falls, ID (United States); National Nuclear Security Administration (NNSA), Washington, DC (United States); 2020.

 

Mortreau P, Berndt R. Determination of the uranium enrichment with the NaIGEM code. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 2004 Sep 11;530(3):559-67.

 

Mollel RS, Stankovic L, Stankovic V. Explainability-informed feature selection and performance prediction for nonintrusive load monitoring. Sensors. 2023 May 17;23(10):4845.

 

Ryu J, Park C, Park J, Cho N, Park J, Cho G. Development of neural network model with explainable AI for measuring uranium enrichment. IEEE Transactions on Nuclear Science. 2021 Sep 28;68(11):2670-81.

 

 

 

3.     Survey on Explainable AI for Malware and Binaries

 

Severi G, Meyer J, Coull S, Oprea A. Explanation-guided backdoor poisoning attacks against malware classifiers. In 30th USENIX Security Symposium (USENIX Security 21) 2021 (pp. 1487-1504).

 

Sun R, Wang W, Dong T, Li S, Xue M, Tyson G, Zhu H, Guo M, Nepal S. Measuring vulnerabilities of malware detectors with explainability-guided evasion attacks. arXiv preprint arXiv:2111.10085. 2021 Nov 19.

 

Kim DY, Jeong AY, Lee TJ. Analysis of malware group classification with eXplainable artificial intelligence. Journal of the Korea Institute of Information Security & Cryptology. 2021;31(4):559-71.

 

Manthena H, Kimmel JC, Abdelsalam M, Gupta M. Analyzing and Explaining Black-Box Models for Online Malware Detection. IEEE Access. 2023 Mar 9;11:25237-52.

Prasse P, Brabec J, Kohout J, Kopp M, Bajer L, Scheffer T. Learning explainable representations of malware behavior. In Machine Learning and Knowledge Discovery in Databases: Applied Data Science Track, European Conference, ECML PKDD 2021, Bilbao, Spain, September 13–17, 2021, Proceedings, Part IV. 2021 (pp. 53-68). Springer International Publishing.

 

Sun R, Xue M, Tyson G, Dong T, Li S, Wang S, Zhu H, Camtepe S, Nepal S. Mate! Are you really aware? An explainability-guided testing framework for robustness of malware detectors. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering 2023 Nov 30 (pp. 1573-1585).

 

Rosenberg I, Meir S, Berrebi J, Gordon I, Sicard G, David EO. Generating end-to-end adversarial examples for malware classifiers using explainability. In 2020 International Joint Conference on Neural Networks (IJCNN) 2020 Jul 19 (pp. 1-10). IEEE.

 

Alenezi R, Ludwig SA. Explainability of cybersecurity threats data using SHAP. In 2021 IEEE Symposium Series on Computational Intelligence (SSCI) 2021 Dec 5 (pp. 1-10). IEEE.

 

Mitchell J, McLaughlin N, Martinez-del-Rincon J. Generating Sparse Explanations for Malicious Android Opcode Sequences using Hierarchical LIME. Computers & Security. 2023 Dec 7:103637.

 

Rahman T, Ahmed N, Monjur S, Haque FM, Kabir N. PDFGuardian: An innovative approach to interpretable PDF malware detection using XAI with SHAP framework (Doctoral dissertation, Brac University).

 

Manthena H. Explainable Machine Learning Based Malware Analysis (Doctoral dissertation, North Carolina Agricultural and Technical State University).

 

 

 

4.     Survey on Explainable AI for Transformers and LLMs

 

Sun T, Chen H, Qiu Y, Zhao C. Efficient Shapley Values Calculation for Transformer Explainability. In Asian Conference on Pattern Recognition 2023 Nov 5 (pp. 54-67). Cham: Springer Nature Switzerland.

 

Kashefi R, Barekatain L, Sabokrou M, Aghaeipoor F. Explainability of Vision Transformers: A Comprehensive Review and New Perspectives. arXiv preprint arXiv:2311.06786. 2023 Nov 12.

 

Parcalabescu L, Frank A. On Measuring Faithfulness of Natural Language Explanations. arXiv preprint arXiv:2311.07466. 2023 Nov 13.

 

Zhao H, Chen H, Yang F, Liu N, Deng H, Cai H, Wang S, Yin D, Du M. Explainability for large language models: A survey. ACM Transactions on Intelligent Systems and Technology. 2023.

 

Miglani V, Yang A, Markosyan AH, Garcia-Olano D, Kokhlikyan N. Using Captum to Explain Generative Language Models. arXiv preprint arXiv:2312.05491. 2023 Dec 9. (A minimal Captum usage sketch appears after this topic's list.)

 

Enouen J, Nakhost H, Ebrahimi S, Arik SO, Liu Y, Pfister T. TextGenSHAP: Scalable Post-hoc Explanations in Text Generation with Long Documents. arXiv preprint arXiv:2312.01279. 2023 Dec 3.

 

Huang S, Mamidanna S, Jangam S, Zhou Y, Gilpin LH. Can large language models explain themselves? A study of LLM-generated self-explanations. arXiv preprint arXiv:2310.11207. 2023 Oct 17.

 

Slack D, Krishna S, Lakkaraju H, Singh S. Explaining machine learning models with interactive natural language conversations using TalkToModel. Nature Machine Intelligence. 2023 Aug;5(8):873-83.

 

Kumar A, Agarwal C, Srinivas S, Feizi S, Lakkaraju H. Certifying llm safety against adversarial prompting. arXiv preprint arXiv:2309.02705. 2023 Sep 6.
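
To give a concrete feel for the tooling these papers evaluate, here is a minimal attribution sketch using Captum (the library in Miglani et al. above). The toy two-class classifier is an illustrative stand-in, not the LLM-specific workflow from that paper.

    # Minimal Captum sketch; the model below is a hypothetical stand-in.
    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    # Toy two-class classifier over 8 input features.
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    model.eval()

    ig = IntegratedGradients(model)
    x = torch.rand(1, 8)  # a single input example

    # Per-feature attributions for class 0, plus a convergence check
    # on the integral approximation.
    attributions, delta = ig.attribute(x, target=0,
                                       return_convergence_delta=True)
    print(attributions, delta)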

 

 

 

5.     Survey on Strengths and Limitations of Current Explainable AI Software

 

https://cloud.google.com/explainable-ai (Google Cloud's Explainable AI offering: feature attributions for models served on its platform)

 

https://xaitk.org (XAITK, the Explainable AI Toolkit)

 

https://www.fiddler.ai/explainable-ai (Fiddler's commercial model-monitoring and explainability platform)

 

https://captum.ai (Captum, a model-interpretability library for PyTorch)

 

https://github.com/interpretml/interpret (InterpretML: glass-box models such as Explainable Boosting Machines, plus black-box explainers)

 

https://github.com/Trusted-AI/AIF360 (AI Fairness 360, a toolkit for detecting and mitigating bias)

 

https://github.com/SeldonIO/alibi (Alibi, with methods such as anchors and counterfactual explanations)

 

https://github.com/eli5-org/eli5 (ELI5, for inspecting and explaining scikit-learn and related models)

 

https://github.com/marcotcr/lime (LIME, the reference implementation of local interpretable model-agnostic explanations)

 

https://github.com/shap/shap (SHAP, the reference implementation of SHapley Additive exPlanations; a short usage sketch for lime and shap follows this list)
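
As noted above, here is a minimal sketch exercising lime and shap, two of the libraries listed. The scikit-learn dataset and random-forest model are illustrative stand-ins: TreeExplainer assumes a tree ensemble, and LIME only needs a predict_proba-style callable.

    # Minimal shap + lime sketch; dataset and model are stand-ins.
    import shap
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    data = load_breast_cancer()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    # SHAP: Shapley-value attributions, computed exactly for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(data.data)

    # LIME: fit a local surrogate model around one prediction.
    lime_explainer = LimeTabularExplainer(
        data.data, feature_names=list(data.feature_names),
        class_names=list(data.target_names), mode="classification")
    exp = lime_explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=5)
    print(exp.as_list())  # top weighted features for this instance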