Zihe Song (宋子鹤)

PhD Candidate in Computer Science at UT Dallas · Research on LLMs, UI testing, and intelligent interaction


I am a PhD candidate in Computer Science at the University of Texas at Dallas, advised by Prof. Wei Yang. My research lies at the intersection of large language models (LLMs), mobile systems, and software testing.

I build LLM-driven agents and automation frameworks that can understand and interact with complex user interfaces: they explore apps, reason about ambiguous user goals, and generate robust test executions at scale. My work combines program analysis, UI modeling, and multimodal interaction to make intelligent systems more reliable, efficient, and usable.

My research has led to systems for:

  • Parallel mobile UI testing that improves coverage, reduces testing time, and detects more crashes (TAOPT, ASPLOS 2025).
  • LLM-based UI exploration frameworks that learn to navigate and test real apps (Guardian, ISSTA 2024).
  • Automatic repair of flaky or inefficient test behaviors in large-scale industrial settings (WEFix, WWW 2024).
  • Efficiency and robustness evaluation for modern ML systems, including neural machine translation and image captioning (NICGSlowDown, CVPR 2022; NMTSloth, ESEC/FSE 2022).

I am broadly interested in:

  • LLM agents for UI navigation, accessibility, and debugging
  • Testing and evaluation for interactive AI systems
  • Large-scale benchmarking and automation for mobile apps and tools
  • Robustness, efficiency, and reliability of deployed ML systems

When I am not doing research, I enjoy teaching, mentoring students, and working on course design. I also like tennis, skiing, and experimenting with new recipes.

🔍 Job Market

I am currently on the 2026 job market and open to opportunities in AI/ML, intelligent systems, LLM agents, and software engineering research. I welcome conversations with teams working on cutting-edge interaction systems, evaluation, and agentic workflows. Feel free to reach out at 📧 zihe.song@utdallas.edu if you believe there might be a fit.