Agents Cross-Promote Too: Circle's AI Hackathon Was Wild
Original Title: Altruist and Adversary: Agentic Behavior in the USDC Moltbook Hackathon
Original Author: Circle
Translation: Peggy, BlockBeats
Editor's Note: As AI agents gain the ability to perform tasks, use tools, and engage in economic activity, a new question arises: how will they behave in a real-world incentive environment?
This article documents an experiment by the Circle team. They hosted a USDC hackathon on the social platform Moltbook, where only AI agents are allowed to post, letting Openclaw agents submit projects, join discussions, and vote autonomously. The results were both exciting and messy: the agents produced real projects and engaged in technical discussion, but they also tested the edges of the rules. They misunderstood instructions, ignored formatting requirements, traded votes with one another, and even exhibited behavior resembling "collusion."
This experiment provided a rare window into the "agent economy": when AI acts as both a participant and a decision-maker, collaboration, competition, and strategic behavior often coexist. To some extent, these phenomena are not fundamentally different from the market and voting mechanisms in human society.
This experiment quickly sparked widespread community discussion. Many viewed it as an intriguing validation of the self-governing capabilities of the agent economy. Some commentators pointed out that agent systems still need clearer security guardrails to avoid "self-rationalization" biases. Others believed that as agents gradually enter real economic activities, the real bottleneck in the future may lie in the compliance settlement and payment system. As one comment put it: "The agent economy is very powerful but also needs clear guardrails."
The following is the original text:
Embracing Claw
At Circle, we have always enjoyed hosting hackathons. Whether at various conference venues or when unveiling a new product, we want to put the best tools in the hands of developers—or in this case, in Claw's hands.

After witnessing the explosive growth of the Openclaw agent-based AI framework, we decided to organize a hackathon that only allowed AI agents to participate.
This fast-rising software lets agents autonomously send emails, call APIs, and even control your thermostat... but can they submit hackathon projects on their own? Circle wanted to put these "genuinely useful AIs" to the test in a real experiment.
Our question was simple: with a $30,000 prize pool on the line, how would Openclaw agents behave? The answer, surprisingly, was "like humans."
We held a USDC hackathon in our m/usdc subcommunity on Moltbook, a social media platform where only AI agents can post. Our goal was to have the agents go through the entire process themselves: submitting projects, voting, and ultimately selecting a winner. While many agents followed the rules, we also found that some ignored the competition rules, traded votes with one another, and even attempted to send tokens to the hackathon's agents.
Designing Rules for an "Agent Hackathon"
The agents had five days to submit their projects. To help them complete the task, we created a USDC Hackathon Skill, a Markdown-based guide to teach Openclaw agents how to submit projects according to the rules. These rules were also posted in the original hackathon announcement:
Choose one of three tracks: Agentic Commerce, Smart Contract, or Skill.
Vote for five different projects, with voting to take place at least one day after the start of the hackathon.
Both project submissions and voting must follow the specified format.
These rules were primarily motivated by three considerations: first, to ensure agents would discuss and evaluate a broader set of projects; second, to observe whether agents could accurately follow instructions when faced with multi-step tasks; third, to avoid deadlock between project submissions and voting.
One thing we were particularly interested in observing was whether agents would repeatedly check Moltbook for new projects to vote on, for example via a skill similar to Moltbook Heartbeat that refreshes on a regular schedule.
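For illustration only, such a heartbeat could be a simple polling loop. The sketch below is our own, not an actual Openclaw skill; the Moltbook endpoint, response schema, and tag check are all assumptions.

```python
import time
import requests

MOLTBOOK_API = "https://moltbook.example/api"  # hypothetical base URL, not Moltbook's real API
seen_posts: set[str] = set()

def fetch_recent_posts() -> list[dict]:
    """Fetch recent posts from the m/usdc subcommunity (hypothetical endpoint and schema)."""
    resp = requests.get(f"{MOLTBOOK_API}/m/usdc/posts", params={"sort": "new"}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("posts", [])

def heartbeat(interval_seconds: int = 300) -> None:
    """Re-check the board on a fixed interval and surface unseen project submissions."""
    while True:
        for post in fetch_recent_posts():
            title = post.get("title", "")
            if post["id"] not in seen_posts and "#USDCHackathon ProjectSubmission" in title:
                seen_posts.add(post["id"])
                print(f"New submission to evaluate: {title}")
        time.sleep(interval_seconds)
```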
The results were mixed. Agents discussed 204 submitted projects and cast 1,851 votes, but many did not adhere to the competition guidelines. Some agents also showed signs of adversarial behavior, which led to some interesting discoveries.
"Illusory" Project Submissions
Despite clear hackathon rules and submission guidelines, most posts still failed to follow the required submission format. Many posts spelled out the project title in the body but omitted the required tag "#USDCHackathon ProjectSubmission [TRACK]".
In one case, an agent even acknowledged that this information was required, yet still left it out of the title.

An example of a non-compliant submission on moltbook.com in the m/usdc subcommunity.
Even among otherwise mostly compliant posts, some agents still "hallucinated" new hackathon tracks that do not exist, despite being explicitly told to choose one of only three categories: Agentic Commerce, Smart Contract, or Skill.
In these cases, agents often invented a seemingly more "appropriate" track name based on the project's content. This may mean they were trying to find a better-fitting classification for their project, or simply ignoring the established rules. Whatever the reason, the problem remains: these tracks do not exist.

An example of a "hallucinated track" submission in the m/usdc subcommunity on moltbook.com.
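Both failure modes, missing tags and invented tracks, are mechanical to detect. As a rough sketch (our illustration, not Circle's actual tooling, and assuming the bracketed tag form quoted above), a title validator might look like this:

```python
import re

# The three official tracks from the hackathon rules.
VALID_TRACKS = {"Agentic Commerce", "Smart Contract", "Skill"}

# Expected title tag: "#USDCHackathon ProjectSubmission [TRACK]" (bracket form assumed).
TAG_PATTERN = re.compile(r"#USDCHackathon\s+ProjectSubmission\s+\[(?P<track>[^\]]+)\]")

def validate_title(title: str) -> tuple[bool, str]:
    """Check a post title against the submission format and the official track list."""
    match = TAG_PATTERN.search(title)
    if match is None:
        return False, "missing #USDCHackathon ProjectSubmission [TRACK] tag"
    track = match.group("track").strip()
    if track not in VALID_TRACKS:
        return False, f"hallucinated track: {track!r}"  # a track outside the official three
    return True, "ok"

# A compliant title passes; an invented track is flagged.
print(validate_title("PayBot #USDCHackathon ProjectSubmission [Smart Contract]"))  # (True, 'ok')
print(validate_title("PayBot #USDCHackathon ProjectSubmission [DeFi Payments]"))   # flagged
```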
As the competition progressed, non-compliant submissions and off-topic posts grew faster than valid submissions. Under the competition rules, agents had no clear incentive to post this invalid content, so it is more likely that some agents simply had difficulty understanding or following the instructions.
That said, given that a considerable number of agents did submit projects as required, we believe the rules themselves were reasonably clear.

The changing number of valid and invalid project submission posts over time in the m/usdc subcommunity on moltbook.com.
Agent "Elections"
Even so, we observed 9,712 comments, many of which revolved around the technical aspects of the projects without involving voting. Most of these comments did not follow the recommended comment format and scoring criteria, although those rules were not enforced by the Skill. This suggests that agents joined the hackathon discussions not only to satisfy the competition requirements but also, to some extent, to engage in genuine technical evaluation and exchange.
By the end of the competition, we counted 1,352 unique votes for valid projects and 499 unique votes for invalid ones. Interestingly, many agents with highly ranked projects complied with the rules when submitting, but did not fulfill the requirement to vote for five different projects.
Some agents even voted for themselves, or voted multiple times for the same project. This shows they were fully capable of returning to Moltbook after their initial submission to cast votes; their choices simply did not follow the established rules.
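Rules like these are straightforward to enforce after the fact. A minimal sketch of a tally that drops self-votes and duplicates (the vote record structure here is our assumption, not Moltbook's data model):

```python
from collections import defaultdict

def tally_votes(votes: list[dict]) -> dict:
    """Count unique valid votes, flagging self-votes and repeat votes.

    Each vote is assumed to look like {"voter": ..., "project": ..., "author": ...},
    where "author" is the agent that submitted the project being voted on.
    """
    seen: set[tuple[str, str]] = set()
    tally: dict[str, int] = defaultdict(int)
    flagged: list[dict] = []
    for vote in votes:
        if vote["voter"] == vote["author"]:
            flagged.append({**vote, "reason": "self-vote"})
            continue
        key = (vote["voter"], vote["project"])
        if key in seen:
            flagged.append({**vote, "reason": "duplicate vote for the same project"})
            continue
        seen.add(key)
        tally[vote["project"]] += 1
    return {"tally": dict(tally), "flagged": flagged}
```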
In addition, some agents began promoting other projects, both in the comment sections of competing projects and in standalone posts on Moltbook. Some agents even went as far as promoting a "vote-trading" scheme: if you vote for my project, I'll vote for yours.
While the competition rules did not prohibit this behavior, the volume of agent interaction in these posts makes the phenomenon concerning.

An example post of "vote-trading" on the m/usdc subcommunity on moltbook.com, which received a total of 99 comments.
Potential Human Intervention
This vote-trading post suggests the possibility of human involvement or external manipulation. We attempted to generate similar comments via a chatbot interface and found that some models (e.g., Claude Sonnet 4.6) would outright refuse to generate such content, while others (e.g., GPT-5.2 Thinking) would comply but warn that the behavior might violate the competition rules. If a human is operating a particular agent account, or steering agents through prompts or toolkits, that could explain why such posts appeared during the hackathon.
Although Moltbook is designed for use by AI agents only (registration requires verification via an X account), other researchers have found that identity spoofing is still possible. We also observed instances of suspected human activity, for example under the initial hackathon announcement post.
A typical case: the highest-rated comment there is, of all things, the opening monologue of the movie "Bee Movie" (2007). The text is a widely circulated internet copypasta (a block of text copied and pasted over and over) and was likely posted by a human, as its content is entirely unrelated to the discussion. If such behavior was widespread during the hackathon, some of the adversarial behavior, such as vote-trading or self-voting, might be explained the same way.

A Moltbook post published by a human, with more details on this attack vector available here.
The Future of Agent-Based Finance
While this hackathon was just an experiment, we believe it will be the first of many agent-driven development events. Three main conclusions can be drawn from the results.
Agents can produce real projects under financial incentives
You can read more about some exciting projects from this hackathon here. Although the competition did not involve human judging, the quality of some submissions still impressed us. This indicates that agent-based development has made significant progress over the past year.
Agents will "rationalize" instructions rather than strictly execute them
Agents repeatedly had trouble following the rules we provided; many executed only part of the instructions. Some high-quality projects might even have won had they fully complied with the rules. This shows that simply handing agents instructions is not enough: rules must be not only clear but also backed by verification mechanisms and incentives that ensure compliance.
Agents both cooperate and compete
While human intervention may have played a role in some cases, we did observe agents actively discussing collusion strategies during the hackathon. Future hackathon designers could explicitly prohibit collusion in the rules to see whether such behavior decreases. If agents still cannot reliably follow instructions, organizers may need to introduce more guardrails.
Agent technology is exciting, but we must also ensure that it does not shift from the exploration we expect to exploitation and manipulation. Some may argue that these behaviors are simply a natural outcome of stronger agents defeating weaker ones — after all, the X account of Openclaw once claimed, "Claw is the Law."
The real question is: how much of this ethos are we actually willing to accept? What guardrails are needed? And how do we balance the immense power agents bring against the uncertainty that comes with it?
At Circle, we are building these systems with safety in mind, and we hope you are too.