Observations and personal opinions from BonnaZhu, Partner at Nothing Research. The following content does not constitute investment advice.
1. Building your own vs. building on Poly/Kalshi
The main reason most project teams want to launch their own prediction market is, ultimately, to borrow the valuations of Polymarket and Kalshi and command a higher capital-market premium. But this path is not easy. A few days ago I saw a discussion on CT: why are most of the projects coming out of the Polymarket Builder Program broker/aggregator projects?
Thinking about it, it makes sense: Poly and Kalshi are essentially "information exchanges," and an exchange's ecosystem is the old cliché of aggregation, order flow, and tooling. Liquidity is the lifeblood of an exchange. Having spent so much time and money building that moat, why would Poly and Kalshi hand over "depth" and "price discovery"?
Given that, a new project trying to break the deadlock must think beyond pre-TGE airdrops: what does it rely on after TGE? If it cannot answer that, pivoting early may be the wise move, unless the goal is just a quick cash-out.
After all, even Robinhood chose to plug into Kalshi rather than build another prediction market itself. As these two ecosystems grow, their liquidity advantage on mainstream events only compounds. In the end, the prediction-market track will likely be left with just these two as the main venues.
Isn't the liquidity of large-cap tokens on many small and mid-sized crypto exchanges already routed straight from Binance?
2. Mainstream Events vs. Long-tail Events
But does this mean that small platforms have no opportunity?
Not necessarily. Poly/Kalshi focus primarily on "mainstream events": standardized, publicly verifiable, heavily media-covered targets such as elections, macro data, sports, and politics. It is not that they ignore long-tail events; rather, their model cannot support them well. Maintaining an order book is costly, and servicing a long tail of low-volume events for a handful of traders is not cost-effective and may not earn much. This is somewhat like mainstream coins vs. altcoins:
- Mainstream coins most logically trade on Binance and OKX
- Various altcoins are better suited to on-chain DEXs or small and mid-sized platforms
Thus, small and mid-sized platforms may evolve toward routing mainstream events to Poly and Kalshi while differentiating on long-tail events with a wider selection. But don't expect much liquidity there: unlike altcoins, there are no project teams acting as market makers for an individual event, and long-tail events may simply not attract many participants.
The final shape of this segment will probably hinge on offering markets others don't, and on letting users create events themselves. In practice, though, that path is full of pitfalls, including dispute arbitration, oracle design, and so on.
3. AMM is not the core competitiveness
Many people assume the core competitiveness of an on-chain prediction market lies in its AMM. But do a little research and you will find that prediction-market AMM designs (including LMSR, the earliest model used by Polymarket and Gnosis) have long been open-sourced and studied to death.
The real difficulty is not the formula,
but who is willing to continuously provide liquidity.
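To illustrate how little secret sauce there is in the formula itself, here is a minimal sketch of Hanson's LMSR (logarithmic market scoring rule) mentioned above. The cost function is C(q) = b · ln(Σᵢ exp(qᵢ/b)), prices are its gradient (a softmax over outstanding shares), and a trade costs C(q') − C(q). The parameter values are illustrative only:

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b):
    """Instantaneous prices: the softmax of outstanding shares q."""
    weights = [math.exp(qi / b) for qi in q]
    total = sum(weights)
    return [w / total for w in weights]

def trade_cost(q, b, outcome, shares):
    """Cost to buy `shares` of `outcome`: C(q') - C(q)."""
    q_new = list(q)
    q_new[outcome] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

# A fresh YES/NO market with liquidity parameter b = 100:
b = 100.0
q = [0.0, 0.0]
print(lmsr_prices(q, b))          # [0.5, 0.5]
print(trade_cost(q, b, 0, 50.0))  # cost of buying 50 YES shares
```

Note that the operator's worst-case loss is bounded by b · ln(n) for n outcomes, which is exactly the point of the section: the math caps the subsidy, but someone still has to fund it.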
The essence of a prediction market is binary options, and the defining property of a binary option is that its gamma explodes as expiry approaches, making it nearly impossible to hedge. That is before even mentioning non-price events: event-based targets with no futures market to hedge against at all. Liquidity providers losing money is the norm here, and it is fundamentally different from impermanent loss in crypto AMMs.
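The hedging blow-up can be made concrete with a textbook Black-Scholes sketch. For a cash-or-nothing call (pays 1 if the underlying finishes above the strike, r = 0 assumed), the price is N(d2) and the delta is φ(d2) / (S·σ·√T), which at the money grows like 1/√T as expiry nears. The numbers below (S = K = 100, σ = 60%) are illustrative only:

```python
import math

def norm_pdf(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def binary_call_delta(S, K, sigma, T):
    """Delta of a cash-or-nothing call (payout 1 if S_T > K), with r = 0.
    Price = N(d2); delta = pdf(d2) / (S * sigma * sqrt(T))."""
    d2 = (math.log(S / K) - 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    return norm_pdf(d2) / (S * sigma * math.sqrt(T))

# At the money, the required hedge ratio explodes as expiry nears:
for T in (30 / 365, 7 / 365, 1 / 365, 1 / 8760):  # 30d, 7d, 1d, 1h to expiry
    print(f"T = {T:.5f}y  delta = {binary_call_delta(100, 100, 0.6, T):.3f}")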
So when certain projects claim to have "solved" impermanent loss, I don't buy it at all, because that contradicts the nature of binary options. A spot AMM can still claw back losses through sufficiently high trading fees, but if a binary-options AMM leans on high fees to keep LPs whole, a single trade could cost several percentage points, making it unplayable for users. This also means that a platform adopting the AMM mechanism will likely end up providing the liquidity itself, and the question becomes how long its funds can sustain the losses.
Poly itself started with an AMM and only later shifted to an order book. High slippage and unfriendliness to large trades were certainly part of the reason, but the core issue was that the platform kept losing money on liquidity and was forced to keep topping it up.
4. The PumpFun moment of prediction markets?
In my view, PumpFun's biggest insight is not democratizing token issuance but crowdsourcing liquidity: users speculating on the token during the internal (bonding-curve) phase indirectly do the work of gathering public liquidity. At graduation, the accumulated funds and remaining tokens are paired into an LP position, which is then burned to create permanent liquidity for trading.
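That graduation step can be sketched in a few lines. This is a hypothetical illustration of the mechanic described above, not PumpFun's actual parameters or contract logic: the quote currency raised on the curve is paired with the unsold tokens into a constant-product pool, and the LP position is burned so the liquidity can never be withdrawn.

```python
def graduate(raised_quote, remaining_tokens):
    """Pair bonding-curve proceeds with unsold tokens into a
    constant-product pool (x * y = k), then burn the LP position
    so the pooled liquidity is permanent."""
    k = raised_quote * remaining_tokens           # pool invariant
    spot_price = raised_quote / remaining_tokens  # initial AMM price
    return {"k": k, "price": spot_price, "lp_burned": True}

# Illustrative numbers only (not real protocol parameters):
pool = graduate(raised_quote=85.0, remaining_tokens=200_000_000)
print(pool["price"])  # opening price in quote units per token
```

The point for prediction markets is the shape of the trick, not the numbers: speculative inflows during one phase are converted into un-withdrawable public liquidity for the next.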
For prediction markets, carving out a standalone liquidity-provider role is unrealistic, because LPs are bound to lose. So the liquidity best suited to a prediction-market AMM (i.e., binary options) is some kind of public pool: capital whose owners do not care about the outcome, or that comes with losses priced in from the start. Losing money is expected; if the pool happens to profit, everyone is happy, and the gains can even be shared.
Borrowing from PumpFun:
let those who "bet on direction" and those who "feed liquidity" overlap;
let buyers be both speculators and public LPs.
Is that possible?
This is what I am currently most interested in and want to design.

