<aside> ℹ️

Here is the transcript of the Claude Cowork chat that was connected to Notion through Notion MCP to create and update Automated Judging Criteria and Automated Judging Results.

</aside>

Are you able to access the Notion MCP connector and find the page named "NTUPC x Notion 3.0 Ideathon [JUDGES ONLY]"?

I can help you search for that Notion page! Let me use the Notion connector to find it.

Yes! I found the page. The "NTUPC x Notion 3.0 Ideathon [JUDGES ONLY]" page is accessible through your Notion workspace.

Here's what I found:

The page appears to contain resources and submission details for the Notion 3.0 Ideathon, including a submission form link and information about team submissions.

Would you like me to fetch the full content of this page so you can see all the details?

You are an avid volunteer judge for student-driven hackathons. Your next one is judging a Notion Ideathon put on by the NTUPC (Nanyang Technological University Product Club). While 5 finalists were supposed to be selected, 7 were chosen instead to present tomorrow in front of a panel of judges (3 humans and you, the AI judge). Your goal is to score and force-rank the 7 teams, first in advance of the in-person pitches, then adjusting as needed after the pitches.

Read the page you just found (Page ID: 2f11748f-3828-80f3-98e4-d84f93723930) for all details related to the Ideathon, including the submission details, requirements, idea categories, deliverables, and judging criteria. Do not read the sub-pages yet, just the parent page. Next, I will instruct you to iterate through each of the 7 submissions to score them in each of the 3 criteria categories. Once you have read through the page, save relevant info, notes, strategies for how you are going to judge, etc. in a judging.md markdown file in this project folder.

Once you have completed that action, ask me the 3 most important clarifying questions that will help you score each submission in each of the 3 categories. Once you have clarified your judging strategy, update the judging.md file. Anytime you decide to change your judging strategy or understanding of how it should be done, update the judging.md file.

Here are some additional thoughts not found on the Notion page.
* For each of the 3 scoring categories, apply a score from 1 to 10.
* Aim for diversity of scoring, with the least impressive submission well below 5 and the most impressive well above 5. We do not want all the scores to be 6s, 7s, and 8s; we want to make it very clear who the most and least impressive teams are, so if there is a noticeable difference, the scores should reflect it. Remember, this is the top 7, so even the least impressive submission should be somewhat impressive; let's keep expectations high.
* You are going to notice big differences in fidelity. While more might appear to be better, keep in mind every team used AI to generate some of the content, so clarity, utility, creativity, etc. all matter more than length and detail.
* To earn higher scores, a submission will need to show some actual work toward building, whether that's a prototype in or outside of Notion, a video, etc. While not as impressive as a working prototype, static artifacts such as mockups and charts also count and contribute to the scoring.

I'll read the judging page and create a comprehensive judging strategy document for you.

Now I'll create a comprehensive judging strategy document based on what I've learned.

Perfect! I've read the judging page and created a comprehensive judging strategy document. Now I have 3 critical clarifying questions to help me score each submission effectively:

🔍 Three Most Important Clarifying Questions

1. Evidence Standards for Problem Clarity

The criterion asks for "evidence" (data or observations) that illustrates the problem's impact. However, student hackathon teams may not have extensive user research.

Question: How rigorous should I be about evidence requirements? Should I: