Mini Challenge

The mini challenge encourages exploration of how agentic AI can automate data visualization and visual analytics. We provide the dataset to visualize (VisPubData), alongside a Mini Challenge Template that ingests and analyzes the dataset and produces concise data visualization reports. The template is meant to help you get started quickly.

Evaluation

After the submission server closes, we will apply a review process similar to that of paper submissions. Evaluation criteria include:
  • Agent-generated report — clarity, coherence, and insightfulness;
  • Technical report — explanation of key decisions, challenges faced, and lessons learned.

Accepted submissions will be invited to present their work.

Submission

The template, which serves as the baseline, is available at Mini Challenge Template; its README explains the file structure and how to use it to get started. Once logged in, you can submit your agent implementation to the evaluation server. You can submit multiple times, up to 2 times per day. Guide: How to Start with Template & Submit for Challenge?

Leaderboards

In this competition, we offer both public and private leaderboards:
  • Public leaderboard: This determines the mini-challenge awards. It shows all finalized submissions, which reviewers will evaluate.
  • Private leaderboard: Displays all your previous submissions. You can review them and mark only one as your finalized submission.

FAQ

  • Do I need an OpenAI API key?
    You’ll need an API key to test your agent locally. However, it’s not required for submissions — for now, the evaluation server will handle all LLM calls using GPT-4o (version: 2024-11-20).
  • Will the mini-challenge count as a publication?
    Yes, awarded submissions will be published in the same format as the short paper submissions.
  • What should I submit in the end?
    You will submit a technical report (up to 2 pages via the PCS system) that (1) describes how your best-performing agent was developed and (2) includes a link to its best generated report.
  • Do we support frameworks other than LangGraph?
    Yes. As long as your implementation follows the template instructions and the interface in agent.py with your agent class, and produces an output in the root directory during evaluation, you're free to use other frameworks (see the sketch below).
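For illustration, here is a minimal sketch of what such an implementation might look like. The class name Agent, the method names, and the output filename report.md are assumptions made for this example; the actual interface is defined by agent.py and the template's README.

```python
# agent.py, a minimal sketch. Class name, method names, and output
# filename are assumptions; consult the template's README for the
# actual interface the evaluation server expects.
import os


class Agent:
    """A hypothetical agent that analyzes VisPubData and writes a report."""

    def __init__(self, dataset_path: str = "vispubdata.csv"):
        # Path to the provided dataset; the real template may supply this.
        self.dataset_path = dataset_path

    def analyze(self) -> str:
        # Placeholder for your analysis pipeline (LangGraph or any other
        # framework), e.g. LLM calls that the evaluation server routes
        # through GPT-4o on its side.
        return "# Data Visualization Report\n\nYour agent's findings here."

    def run(self) -> None:
        # Write the report to the root directory, where (per the FAQ)
        # an output is expected during evaluation.
        with open("report.md", "w", encoding="utf-8") as f:
            f.write(self.analyze())


if __name__ == "__main__":
    # Local testing needs your own OpenAI API key; it is not required
    # at submission time, since the server handles all LLM calls.
    assert os.environ.get("OPENAI_API_KEY"), "Set OPENAI_API_KEY to test locally"
    Agent().run()
```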

Timeline

  • Submission Site Opens: June 1st, 2025
  • Challenge Closes: Aug 30th, 2025
  • Author Notification: Sep 15th, 2025
  • Camera-Ready Deadline: Oct 1st, 2025
  • Workshop Day: Nov 2nd or 3rd, 2025

Forum

Please join the Discord server to discuss the challenge with other participants and the organizers.

Organizers

Zhu-Tian Chen, University of Minnesota
Shivam Raval, Harvard University
Enrico Bertini, Northeastern University
Trevor DePodesta, Harvard University
Niklas Elmqvist, Aarhus University
Nam Wook Kim, Boston College
Pranav Rajan, KTH Royal Institute of Technology
Renata G. Raidou, TU Wien
Emily Reif, Google Research & University of Washington
Olivia Seow, Harvard University
Qianwen Wang, University of Minnesota
Yun Wang, Microsoft Research
Catherine Yeh, Harvard University

Challenge Development Team

Pan Hao, University of Minnesota
Divyanshu Tiwari, University of Minnesota
Chia-Lun (James) Yang, University of Minnesota
Zhu-Tian Chen, University of Minnesota
Qianwen Wang, University of Minnesota

Awards

Winners will be invited to present at the workshop and will receive prizes and awards. Details coming soon!