Mini Challenge

The mini challenge encourages the exploration of how agentic AI can automate data visualization and visual analytics. We provide the dataset to visualize (VisPubData), alongside a Mini Challenge Template that ingests and analyzes the dataset and produces concise data visualization reports. The template is intended to help you get started quickly.
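For orientation, here is a minimal sketch of loading and inspecting the dataset with pandas. The file name and the Year column are assumptions; check the template's README for the actual paths and schema.

```python
import pandas as pd

# Hypothetical file name; see the template's README for the actual path.
df = pd.read_csv("vispubdata.csv")

# Quick orientation: column names and papers per year
# ("Year" is an assumed column name).
print(df.columns.tolist())
print(df.groupby("Year").size())
```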

Evaluation

After the submission server closes, we will apply a review process similar to that for paper submissions. Evaluation criteria include:
  • Agent-generated report — clarity, coherence, and insightfulness;
  • Technical report — explanation of key decisions, challenges faced, and lessons learned.
Accepted submissions will be invited to present their work.

Submission

The baseline template is available at Mini Challenge Template, with a README that explains the file structure and how to get started. Once logged in, you can submit your agent implementation to the evaluation server. You may submit multiple times, up to 10 times per day, and each submission may use at most 1M tokens in total. By the submission deadline, you will submit a technical report (VGTC short paper format, up to 2 pages, via the PCS system) that (1) describes how your best-performing agent was developed and (2) includes a link to its best generated report.
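If you want to track your usage against the 1M-token budget, one rough approach is to count tokens locally with tiktoken. This is a sketch under the assumption that the server's accounting resembles OpenAI-style tokenization, which is not specified here.

```python
import tiktoken

# Rough estimate using a common OpenAI encoding; the evaluation server's
# own accounting may differ.
enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

# Accumulate an estimate over every prompt and completion your agent handles.
total = count_tokens("example prompt") + count_tokens("example completion")
assert total <= 1_000_000, "over the 1M-token submission budget"
```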
Guide: How to Start with Template & Submit for Challenge?

Leaderboards

In this competition, we offer both public and private leaderboards:
  • Public leaderboard: This determines the mini-challenge awards. It shows all finalized submissions, which reviewers will evaluate.
  • Private leaderboard: Displays all your previous submissions. You can review them and mark only one as your finalized submission.

FAQ

  • Do I need an LLM API key?
    You'll need an API key to test your agent locally. However, it's not required for submissions — for now, the evaluation server handles all LLM API calls. Multiple LLMs are supported, including GPT-4o, o1, and o4-mini.
  • Will the mini-challenge count as a publication?
    Yes, awarded submissions will be published in the same format as the short paper submissions.
  • Do we support frameworks other than LangGraph?
    Yes. As long as your implementation follows the template instructions, implements the agent class interface in agent.py, and produces an output in the root directory during evaluation, you're free to use other frameworks. A minimal sketch of such an agent follows this list.
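For illustration, here is a minimal, framework-free sketch of an agent that meets this contract. The class name, method signature, and output file name are assumptions; defer to the interface actually defined in the template's agent.py.

```python
# agent.py: a minimal sketch, assuming the template passes a dataset path
# and expects a report written to the root directory.
from pathlib import Path

import pandas as pd


class Agent:
    def run(self, dataset_path: str) -> None:
        """Analyze the dataset and write a short report (stub logic)."""
        df = pd.read_csv(dataset_path)
        # Replace with your own analysis / LLM pipeline
        # (LangGraph or any other framework).
        summary = f"The dataset has {len(df)} rows and {len(df.columns)} columns.\n"
        Path("report.md").write_text(summary)
```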

Timeline

  • Submission Site Opens: June 1st, 2025
  • Challenge closes: Aug 20th, 2025
  • Author Notification: Sep 1st, 2025
  • Camera-Ready Deadline: Oct 1st, 2025
  • Workshop Day: Nov 3rd, 2025

Forum

Please join the Discord server to discuss the challenge.

Awards

Challenge finalist teams will receive a share of $6,000 in AWS cloud credits:

  • $3K for the Top-1 Winner
  • $1.5K for the Runner-Up
  • $500 each for 3 Honorable Mentions
