Q1 at Booth: three competitions, three different lessons
An Adobe case competition final we couldn't close. A hackathon win in two of five prize categories. A VCIC run that didn't clear the first round. First quarter at Booth taught me more from the losses than from the wins.
My first quarter at Booth included three competitions across three very different formats. I won one, placed in another, and did not advance in the third. The pattern across all three taught me something more specific than any single result would have.
Adobe Case Competition: finalist, but the wrong kind of finalist
My team reached the final round of the Adobe Case Competition, one of ten teams selected from the full field. By any objective measure, that is a strong result for a first-quarter team still finding its footing.
But we knew why we had not won, and we knew it before the results came out.
We had put real work into the case. The data gathering was thorough. The competitor analysis was detailed. We understood the market clearly. What we did not do was step back early enough to ask: what is the decision-maker actually trying to solve, and what would it look like to give them a usable path forward?
It had been a long time since any of us had been in a case competition environment. That gap showed. We had defaulted to a research and analysis mode — the kind of work that feels productive because it produces volume — and we had not reserved enough time or mental energy for synthesis and recommendation. We delivered a strong analysis and a weak answer.
The feedback was useful precisely because it was clear. The gap was not in effort. It was in orientation. A case competition is not asking you to demonstrate that you understand the problem. It is asking you to demonstrate that you can make a call and stand behind it. Those are different skills, and one of them atrophies faster than the other.
I left that competition with a specific thing to fix: stop building toward comprehensiveness when the job is to build toward a decision.
Booth Hackathon: 48 hours, 37 teams, and Choreo
The Booth Hackathon was the kind of event that is hard to describe accurately to someone who was not there. 48 hours. 37 teams. Five prize categories. The judging panel included industry practitioners, consultants, and VCs — people with sharp pattern recognition for what is real and what is not.
We built Choreo.
The idea came from a gap we had genuinely been thinking about before the hackathon started. Most AI assistants on the market are built around corporate use cases: enterprise workflows, team collaboration, organizational knowledge. Choreo was built around the individual: a personal AI assistant that is truly yours, more in the spirit of an open, personal companion than a productivity tool for a company. Think of it as the individual-first counterpart to the wave of enterprise AI that has been dominating the space.
The product resonated because it came from a real observation about what was missing, not from reverse-engineering what judges might want to see.
We entered five categories. We won two: Best Use of AI and Most Likely to Become a Unicorn.

Winning Best Use of AI mattered to me in a specific way. It is easy to claim AI credit in a hackathon by adding a model call somewhere visible and hoping the demo holds. That is not what we did. The AI in Choreo was load-bearing — central to why the product worked, not an annotation on top of something else. The prize was recognition that the integration was substantive.
Most Likely to Become a Unicorn is a different kind of signal. It reflects whether the panel believes the idea has the structural properties of a scalable business: a real market gap, defensible framing, founder-market fit. Winning that in a room of Booth teams, judged by VCs, carries more weight than it would in most other contexts.
Presenting under that kind of pressure — to a panel with genuine expertise, in a room of peers who are also sharp — was one of the better experiences of the quarter. It confirmed something about how I work: I do better when the stakes are real and the feedback is immediate.

VCIC: the team presentation that did not qualify
The Venture Capital Investment Competition (VCIC) was the third competition of the quarter, and the one where the gap was hardest to diagnose.
The format is different from a case competition or hackathon. You form a team, present the team's investment thesis and the strengths you are bringing to the table, select two companies to bet on, and justify your reasoning to the judges as if you were pitching a fund. The judges are evaluating both the quality of your analysis and the coherence of the team as an investment vehicle.
Our team did not qualify to advance.
Looking back, the issue was not the company selection or the investment analysis — we had reasonable views on both. The issue was how we presented ourselves as a team. VCIC is partially asking: would I trust these people to deploy capital and add value as investors? That is a question about conviction, communication, and team chemistry under pressure. We had not prepared for that dimension as carefully as we had prepared for the analytical dimension.
The lesson was about surface area. Every competition has an explicit evaluation and an implicit one. The explicit one is the part you can prepare for from the brief. The implicit one is the part that filters for judgment, presence, and coherence — and it is usually the part that separates teams that advance from teams that do not.
The pattern across all three
Adobe: strong analysis, weak recommendation. VCIC: solid thesis, weak team presentation. Hackathon: strong product idea, presented under real pressure to real evaluators — and it landed.
The common thread is that what I was actually being evaluated on was never quite the same as what the brief described. The brief tells you the topic. It does not tell you what the judges are using to separate the field.
First quarter at Booth gave me three data points on that gap. I am going to keep refining my read of it.