880 Bots, 9,702 Humans
0xInsider tracks 15,668 synced traders on Polymarket across 468,535 markets. Among traders with 10 or more markets traded — a filter that removes one-off accounts and inactive wallets — 880 are classified as bots and 9,702 are classified as humans. The bots represent 8.3% of the active trader population. They generated $114 million in total realized P&L. The humans, representing 91.7%, generated $129 million. That is an 11:1 headcount disadvantage, and the bots nearly matched them dollar for dollar. Just over eight percent of the traders captured 47% of the total profit.
The average bot P&L is $119,156. The average human P&L is $12,671. That is a 9.4x gap in per-trader profitability. But averages hide what medians reveal. The median bot P&L is $2,117 — a modest positive return that says the typical bot is at least somewhat profitable. The median human P&L is negative $2. Half of all human traders on Polymarket with 10 or more markets are underwater. Half of all bots are sitting on at least $2,117 in profit. The difference between a positive and negative median is not a rounding error. It is the clearest possible signal that the two groups are experiencing fundamentally different outcomes on the same platform.
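The mean-versus-median gap described here is the signature of a fat-tailed distribution, and it is easy to reproduce. A toy example with invented P&L numbers, not actual 0xInsider data:

```python
from statistics import mean, median

# Invented P&L samples for illustration only, not real trader data.
# One large winner drags the mean far above the median: the same shape
# as the human distribution, where the typical trader is near flat but
# the average is inflated by a few outliers.
human_pnl = [-500, -120, -50, -10, -2, 0, 5, 40, 300, 250_000]

print(mean(human_pnl))    # ~25,000: the "average trader" looks great
print(median(human_pnl))  # -1.0: the typical trader is flat
```

This is why the article leads with medians: the mean answers "how did the group do in aggregate," while the median answers "how did the typical member do."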
Total volume tells a similar story of disproportionate output. Bots pushed $16.6 billion in total volume. Humans pushed $19.6 billion. Per trader, that is $18.9 million per bot versus $2.0 million per human — a 9.5x difference. Bots trade 17 times more markets on average: 6,487 markets per bot versus 384 per human. They are faster, more active, and more capitalized. And they are more likely to be profitable while doing it. The volume gap matters because it determines how many opportunities a trader has to realize their edge. An edge that fires once is luck. An edge that fires 6,487 times is a business.
These numbers frame every comparison that follows. The bot population is small — under 9% of active traders. But their footprint on total volume and total P&L is wildly disproportionate to their headcount. Every number in this analysis comes from on-chain data processed through 0xInsider's analytics engine, and every trader profile referenced is publicly available at 0xinsider.com/leaderboard. This analysis unpacks why the gap exists, where bots dominate, where humans still hold ground, and what the data says about competing in a market where you share the order book with algorithms that never sleep, never tilt, and never second-guess their own process.
┌──────────────────────────────────────────────────────┐
│ Bots vs Humans — snapshot (10+ markets traded)       │
├───────────────────┬──────────────┬───────────────────┤
│                   │ Bots (880)   │ Humans (9,702)    │
├───────────────────┼──────────────┼───────────────────┤
│ Profitable        │ 66.4%        │ 45.3%             │
│ Avg P&L           │ $119,156     │ $12,671           │
│ Median P&L        │ $2,117       │ -$2               │
│ Avg Markets       │ 6,487        │ 384               │
│ Win Rate          │ 56.0%        │ 48.6%             │
│ Total P&L         │ $114M        │ $129M             │
│ Total Volume      │ $16.6B       │ $19.6B            │
└───────────────────┴──────────────┴───────────────────┘
How 0xInsider Detects Bots
Polymarket does not label accounts as bots or humans. There is no checkbox at registration, no API flag, no on-chain marker. The classification comes from 0xInsider's behavioral analysis engine, which examines on-chain trading patterns across multiple dimensions. The system looks at trading frequency, time-of-day distribution, execution consistency, order book interaction patterns, market coverage breadth, and response latency to market movements. A human who trades five markets per week with irregular timing looks nothing like an algorithm that executes across 50 markets per day with sub-second order placement. The behavioral gap between the two groups is not subtle. It is categorical.
The key signals are temporal regularity, volume consistency, and execution precision. Bots trade at evenly distributed intervals across the 24-hour cycle. They maintain near-identical order sizes across sessions — a bot that places $500 orders on Monday will place $500 orders on Saturday. They respond to market movements within milliseconds, adjusting limit orders faster than any human can read a price change and reach for their mouse. Humans show bursts of activity followed by silence. They trade during waking hours and go quiet at night. Their position sizes vary with emotion, conviction, available capital, and how their last trade went. A human who just lost $5,000 will size their next position differently than they would have before the loss. A bot will not.
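These signals can be approximated with simple statistics. Below is a minimal sketch of the idea, a toy heuristic of my own and not 0xInsider's actual engine, that flags a wallet when both inter-trade intervals and order sizes are unusually regular:

```python
from statistics import mean, pstdev

def looks_automated(trade_timestamps, order_sizes,
                    max_interval_cv=0.25, max_size_cv=0.05):
    """Toy heuristic in the spirit of the signals described above.
    NOT 0xInsider's model; thresholds are invented for illustration.
    Flags a wallet when the coefficient of variation (stdev / mean)
    of both inter-trade intervals and order sizes is very low."""
    intervals = [b - a for a, b in zip(trade_timestamps, trade_timestamps[1:])]
    interval_cv = pstdev(intervals) / mean(intervals)
    size_cv = pstdev(order_sizes) / mean(order_sizes)
    return interval_cv < max_interval_cv and size_cv < max_size_cv

# A bot-like wallet: trades every ~60s with identical $500 orders.
bot = looks_automated([0, 60, 121, 180, 241, 300],
                      [500, 500, 500, 500, 500, 500])
# A human-like wallet: bursty timing, emotional sizing.
human = looks_automated([0, 30, 45, 7200, 7300, 90000],
                        [50, 500, 120, 2000, 75, 900])
print(bot, human)
```

The real classifier uses many more dimensions (latency, order book behavior, market coverage), but the core intuition is the same: regularity is the tell.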
The 880-bot count uses a conservative threshold. The system requires multiple behavioral markers to converge before classifying an account as automated. A wallet that trades frequently but at irregular intervals might be a very active human. A wallet that trades at regular intervals but only in one market might be using a simple alert tool. Only when frequency, regularity, precision, and scale all point in the same direction does the system flag an account as a bot. Borderline cases — accounts that might use simple alert-based tools, copy-trading services, or semi-automated execution — tend to fall into the human bucket. This means the true bot count is likely higher than 880, and the true performance gap between pure algorithmic and pure manual trading may be wider than what the data shows.
For the purposes of this analysis, every comparison uses the same 10-market minimum filter. Traders with fewer than 10 markets lack enough data points for meaningful performance metrics — a trader with three markets and a 100% win rate tells you nothing about their actual skill. The 10-market threshold yields 880 bots and 9,702 humans — 10,582 traders total. Where specific metrics require deeper history (calibration, maker percentage, expectancy per trade), the filter is raised to 20 or more markets and noted explicitly. Every number in this piece is verifiable through 0xInsider's public profiles and leaderboard at 0xinsider.com/leaderboard.
The Profitability Gap: 66% vs 45%
580 out of 873 bots with sufficient data are profitable. That is a 66.4% profitability rate. Among humans, 4,394 of the 9,702 are profitable — 45.3%. The gap is 21 percentage points. A randomly selected bot has roughly two-in-three odds of being in the green. A randomly selected human is closer to a coin flip — and the coin is slightly weighted against them. Two out of every three bots make money. Fewer than one in two humans do.
Win rate tells a complementary story. Bots average a 56.0% win rate across their markets. Humans average 48.6%. That 7.4 percentage point difference sounds small in isolation, but the compounding effect across thousands of trades makes it enormous. At 48.6%, the average human is losing more markets than they win. At 56.0%, the average bot is winning more than it loses — and doing so across 17 times as many markets. A 56% win rate across 384 markets might produce a positive return or might not — variance is high with that few trials. A 56% win rate across 6,487 markets will produce a positive return with near-certainty. The law of large numbers favors the player with the most repetitions, and bots have 17x more repetitions.
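The law-of-large-numbers point can be made precise. A quick sketch, computed in log space so the n = 6,487 case does not overflow, assuming independent markets and equal stakes per market (real trading violates both):

```python
from math import lgamma, log, exp

def p_net_positive(p_win, n):
    """P(win strictly more than half of n independent markets),
    each market an independent Bernoulli(p_win) trial."""
    def log_pmf(k):
        # log of the binomial pmf, via log-gamma to handle large n
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p_win) + (n - k) * log(1 - p_win))
    return sum(exp(log_pmf(k)) for k in range(n // 2 + 1, n + 1))

print(p_net_positive(0.56, 384))   # high, but a bad run is still possible
print(p_net_positive(0.56, 6487))  # effectively certain
```

The same 56% edge gives roughly a 99% chance of a winning record over 384 markets, but over 6,487 markets the failure probability is vanishingly small. Repetition converts edge into certainty.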
The profitability numbers expose a structural reality about prediction markets. In any zero-sum-adjacent market (prediction markets are slightly negative-sum after the platform fee), a higher win rate for one group mechanically implies a lower win rate for another. Bots are not generating returns in a vacuum. They are extracting value from the order flow of human traders — through tighter execution, faster reaction times, and better probability calibration. Every dollar of edge a bot captures is, on aggregate, a dollar that a human on the other side of the trade did not capture. The prediction market is not a slot machine where both sides can win. It is an arena, and 880 bots are in the ring with 9,702 humans. The bots are winning the majority of rounds.
The P&L distributions make this clearer. Among profitable bots, the average P&L is heavily positive — the winners win big. Among unprofitable bots, the losses are concentrated in a small number of spectacularly failed strategies (more on F-grade bots later). Humans show a fatter middle: more traders clustered around zero, with a slight negative skew pulling the distribution left. The median of negative $2 means the typical human trader is essentially flat — not destroyed, but not rewarded for their time, risk, or effort either. They are treading water while bots extract the surplus. And that median masks the thousands of human traders deep in the red, subsidizing both the winning humans and the winning bots above them. The losing side of the prediction market ledger is disproportionately human.
17x More Markets, 9x More Profit
The average bot trades 6,487 markets. The average human trades 384. That 17x gap in market coverage is the single largest behavioral difference between the two groups, and it explains much of the performance gap. More markets means more data points. More data points means more opportunities for a positive edge to compound and for variance to wash out. A bot with a 56% win rate across 6,487 markets will converge toward its true expected value far more reliably than a human with the same edge across 384 markets. At 384 markets, a bad month can wipe out a year of gains. At 6,487, the sample is large enough that short-term variance barely registers.
Volume per trader reinforces the scale difference. Bots average $18.9 million in lifetime volume. Humans average $2.0 million — a 9.5x gap. The bot figure reflects not just more markets, but larger positions per market and more trades per market. Many bots operate as market makers, placing hundreds of limit orders per market to capture spreads. Others run momentum, mean-reversion, or statistical arbitrage strategies that generate dozens of trades per market window. Humans, by contrast, tend to take one or two positions per market and wait for resolution. One entry, one exit (or one resolution). The human approach is not inherently wrong, but it leaves money on the table in every market where microstructure edges exist between the entry and the resolution.
This volume differential has a compounding effect on profitability that goes beyond simple arithmetic. Market-making bots profit from the bid-ask spread, earning a small positive return on every round-trip trade regardless of which side of the market wins. A bot that captures $0.02 per share across millions of shares produces consistent profit independent of outcomes. Humans who take directional positions need to be right more often than they are wrong — and the 48.6% average win rate shows they are not clearing that bar on aggregate. The bot does not need to predict whether an event will happen. It needs to provide liquidity to the traders who think they know. These are fundamentally different businesses operating on the same platform, sharing the same order book, priced in the same currency.
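The "different businesses" point reduces to two different P&L formulas. The $0.02 spread figure comes from the paragraph above; the position sizes and the symmetric $100 payoffs are invented for illustration:

```python
def maker_pnl(spread_per_share, shares_per_roundtrip, roundtrips):
    """Market maker: earns the spread on every completed round trip,
    regardless of which outcome wins."""
    return spread_per_share * shares_per_roundtrip * roundtrips

def directional_ev(win_rate, avg_win, avg_loss, trades):
    """Directional trader: expected P&L depends on being right
    often enough to cover the losses."""
    return trades * (win_rate * avg_win - (1 - win_rate) * avg_loss)

print(maker_pnl(0.02, 1_000, 5_000))           # about $100,000, outcome-independent
print(directional_ev(0.486, 100, 100, 5_000))  # about -$14,000 at the human win rate
```

The maker's formula has no win-rate term at all, which is the whole point: at a sub-50% win rate with symmetric payoffs, the directional book bleeds while the spread-capture book compounds.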
The coverage gap also means bots are diversified in ways humans cannot replicate manually. A bot trading 6,487 markets is spreading risk across thousands of independent events — elections, sports games, crypto price movements, cultural moments, economic indicators. A human trading 384 markets has concentrated exposure to a narrower set of categories and outcomes. When a human puts significant capital into three political markets and all three resolve against them, the loss is material relative to their total portfolio. When a bot loses on three markets out of six thousand, it is a rounding error — literally invisible on their P&L curve. Diversification is not a strategy choice. It is arithmetic. The standard deviation of the average return falls in proportion to the square root of the number of independent bets. A bot making 17x more bets has roughly 4x less volatility in its returns, all else equal. Steadier returns mean fewer drawdowns, fewer emotional decisions (not that bots make emotional decisions), and more reliable compounding.
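The square-root arithmetic behind that volatility claim, assuming independent equal-sized bets:

```python
from math import sqrt

def vol_of_avg(per_bet_vol, n_bets):
    """Volatility of the average return across n independent,
    equal-sized bets scales as 1/sqrt(n)."""
    return per_bet_vol / sqrt(n_bets)

# Same per-bet volatility, the bot's and human's average market counts:
ratio = vol_of_avg(1.0, 384) / vol_of_avg(1.0, 6487)
print(round(ratio, 2))  # 4.11, the "roughly 4x less volatility"
```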
Execution, Calibration, and the Maker Edge
Beyond profitability and volume, the advanced metrics reveal where bots build their edge at the execution level. These numbers require 20 or more markets to calculate reliably, so the comparison pool is slightly different from the headline figures — but the patterns are decisive. Bots average a maker percentage of 60.5%. Humans average 41.7%. Maker percentage measures how often a trader's orders sit on the order book waiting to be filled (maker) versus hitting existing orders (taker). On Polymarket, makers typically pay zero fees. Takers pay fees. Makers set prices on their terms. Takers accept the price someone else has offered. That distinction sounds procedural. It is worth thousands of dollars over a trading career.
A 60.5% maker rate means bots are predominantly providing liquidity. They post limit orders at prices they find favorable, and other traders come to them. They are patient. They let the market come to their price instead of chasing the market's price. A 41.7% maker rate means humans are predominantly consuming liquidity — they see a price, form a view, and hit the buy or sell button immediately. The fee difference alone creates a structural cost advantage for bots. But the real edge is in price selection. Makers choose their entry points. Takers accept whatever the top of the order book offers. Over thousands of trades, that discipline in entry price compounds into a meaningful P&L difference. A maker who gets filled $0.01 per share better has saved, across 100,000 shares, $1,000 in entry cost alone, before even considering the fee savings. And $0.01 is conservative — in thin markets, the maker advantage can be $0.03 to $0.05 per share.
Calibration edge is the most damning metric for humans in the entire dataset. Bots show a calibration edge of +3.1%, meaning their implied probability assessments are 3.1 percentage points more accurate than market consensus prices at the time of their trades. When bots buy, the assets they buy tend to be underpriced by about 3 cents. Humans show a calibration edge of -14.8%. That is not slightly worse than the market — it is dramatically worse. To make the number concrete: when a human trader buys Yes shares priced at $0.50, the true probability of that event occurring is closer to 35%. They are systematically overpaying for their positions by nearly 15 cents on the dollar. This is not a fee problem or a speed problem. It is a judgment problem. Human traders, on average, are deeply miscalibrated about the probability of events they trade on.
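Calibration edge, as described here, can be read as the average gap between realized outcomes and entry prices. A simplified sketch of that reading (0xInsider's exact formula may differ):

```python
def calibration_edge(trades):
    """Average (realized outcome - entry price) across a trader's buys.
    A simplified reading of the calibration-edge metric, not necessarily
    0xInsider's exact definition. Each trade is a (price_paid, resolved_yes)
    pair with resolved_yes in {0, 1}."""
    diffs = [outcome - price for price, outcome in trades]
    return sum(diffs) / len(diffs)

# A miscalibrated trader: buys at $0.50, but the events resolve Yes 35% of the time.
trades = [(0.50, 1)] * 35 + [(0.50, 0)] * 65
print(round(calibration_edge(trades), 3))  # -0.15: overpaying 15 cents per share
```

A positive value means the trader systematically buys underpriced shares; a negative value means the market's price was better than the trader's judgment.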
The calibration gap has a psychological explanation that decades of behavioral economics research supports. Humans are subject to well-documented cognitive biases — overconfidence (they believe their prediction is better than the market's when it usually is not), anchoring (they fixate on recent prices or salient numbers), availability heuristic (they overweight vivid or recent events), and motivated reasoning (they buy shares in outcomes they want to happen, not outcomes they believe will happen). Every one of these biases pushes human traders toward overpaying. Bots have none of them. Their probability estimates come from models — statistical, historical, or machine-learning-based — that do not care about narratives, do not fear losses, and do not root for political candidates. The -14.8% calibration edge is the aggregate dollar cost of human psychology in prediction markets.
Expectancy per trade captures the bottom line of all these advantages combined into a single number. Bots average +$51.23 per trade. Humans average -$9.00 per trade. Every time a bot enters a position, it expects to make $51. Every time a human enters a position, they expect to lose $9. The gap is $60.23 per trade. Multiply that by the thousands of trades each group executes, and the aggregate P&L gap writes itself. The bot edge is not one thing — it is speed, calibration, fee optimization, and position sizing working together in concert, compounded across an inhuman number of repetitions. No single advantage is insurmountable on its own. A human can learn to use limit orders. A human can study calibration. A human can diversify across more markets. But stacking all of these improvements together, maintaining them consistently, and repeating them tens of thousands of times without a single lapse in discipline — that is where automation has a structural advantage that humans cannot fully close.
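Expectancy combines win rate with average win and loss sizes. In the sketch below, only the win rates come from the article; the payoff sizes are invented to land near the reported per-trade figures:

```python
def expectancy(win_rate, avg_win, avg_loss):
    """Expected P&L per trade: E = p*W - (1 - p)*L."""
    return win_rate * avg_win - (1 - win_rate) * avg_loss

# Hypothetical payoff sizes chosen to roughly reproduce the reported numbers.
bot = expectancy(0.56, 140.00, 61.75)    # about +$51 per trade
human = expectancy(0.486, 80.00, 93.20)  # about -$9 per trade
print(round(bot, 2), round(human, 2))
```

Note what the formula implies: a sub-50% win rate is survivable if average wins exceed average losses, but the human cohort manages neither side of the inequality on aggregate.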
Advanced Metrics (20+ markets)
──────────────────────────────────────────────────
Metric              Bots       Humans     Gap
──────────────────────────────────────────────────
Maker %             60.5%      41.7%      +18.8pp
Calibration Edge    +3.1%      -14.8%     +17.9pp
Expectancy/Trade    +$51.23    -$9.00     +$60.23
Longshot Trade %    22.9%      14.7%      +8.2pp
──────────────────────────────────────────────────
The Grade Split: 3x More Likely to Be S-Grade
0xInsider assigns every trader with sufficient history a letter grade from S (exceptional) to F (severely underperforming), based on a Bayesian confidence score that weighs risk-adjusted returns, consistency, win rate, calibration, and sample size. The grade is not just about raw P&L — a high-P&L trader with a tiny sample size might earn a B or C because the system lacks confidence that the performance is repeatable and not driven by luck. A moderate-P&L trader with hundreds of markets and rock-steady consistency might earn an A because the evidence of genuine skill is overwhelming. The grade distribution for bots versus humans reveals how sharply performance diverges at the tails. The full methodology is detailed at 0xinsider.com/learn/how-bayesian-confidence-scoring-works.
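The sample-size intuition behind the grading (more evidence, more confidence) can be sketched with a Beta posterior over the win rate. This is a toy stand-in, not 0xInsider's actual scoring model:

```python
import random

def confidence_skilled(wins, losses, threshold=0.5,
                       samples=100_000, seed=42):
    """Toy Bayesian confidence, NOT 0xInsider's methodology.
    With a uniform Beta(1, 1) prior, the posterior over the true
    win rate is Beta(wins + 1, losses + 1). Estimate
    P(true win rate > threshold) by Monte Carlo sampling."""
    rng = random.Random(seed)
    hits = sum(rng.betavariate(wins + 1, losses + 1) > threshold
               for _ in range(samples))
    return hits / samples

# Same observed 60% win rate, very different amounts of evidence:
print(confidence_skilled(6, 4))      # small sample: the system stays skeptical
print(confidence_skilled(600, 400))  # large sample: near-certain skill
```

This is why a high-P&L trader with a tiny sample can grade below a moderate-P&L trader with hundreds of markets: the posterior simply has not concentrated yet.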
Among graded bots, 45 wallets earn an S-grade — 18.9% of the graded bot population. Their average P&L is $2,357,468 per wallet. Among graded humans, 86 wallets earn S-grade — 6.2% of the graded human population. Their average P&L is $1,538,785. Bots are 3x more likely to reach S-grade than humans on a percentage basis. And the S-grade bots that do reach the top earn 53% more on average — an $818,683 gap per trader. The top tier of algorithmic trading outperforms the top tier of manual trading both in concentration (3x the rate of S-grade achievement) and in magnitude (over $800K more per S-grade trader). If you want to find the most profitable, most consistent, most statistically significant traders on the platform, you are 3x more likely to find one among the bot population than the human population.
The middle grades tell a different story — one of convergence. A-grade bots (57 wallets, $240,335 avg) and A-grade humans (204 wallets, $179,053 avg) are closer in average performance, with the gap narrowing to 1.3x. B-grade shows near-parity: bots average $67,490 across 49 wallets, humans average $63,160 across 212 wallets. C-grade is similarly tight: $10,132 for bots versus $10,907 for humans — humans actually edge out bots by $775 at the C-grade level. In the middle of the distribution, humans and bots perform comparably. The divergence happens at the extremes. The best bots are much better than the best humans, and the worst bots are much worse than the worst humans. The middle is a surprisingly level playing field. A competent human trader who avoids the common pitfalls (overtrading, poor calibration, taker-heavy execution) can match a competent bot through B and C grade territory.
The bottom grades reveal the dark side of automation — and the painful reality for most humans. D-grade bots (18 wallets, -$1,706 avg) are 7.6% of graded bots. D-grade humans (289 wallets, -$1,910 avg) are 20.8% of graded humans. F-grade bots (19 wallets, -$911,592 avg) are 8.0% of graded bots. F-grade humans (237 wallets, -$214,342 avg) are 17.0% of graded humans. The combined D and F rate: 15.6% for bots versus 37.8% for humans. More than a third of all graded human traders land in the bottom two grades. Fewer than one in six bots do. But when a bot does fail, the damage is on a different scale entirely. The average F-grade bot loses $911,592, more than 4x the average F-grade human loss of $214,342. A broken algorithm with access to capital and no human circuit-breaker does not lose a little money. It incinerates a little under a million dollars. Humans lose slowly — a bad trade here, a miscalibrated position there, a losing streak they ride too long. A malfunctioning bot executes its flawed logic at maximum speed and maximum scale, burning through capital in days or hours instead of months.
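Mechanically, the "human circuit-breaker" the failed bots lacked is a small piece of code. A hypothetical drawdown kill-switch, not a feature of any bot cited here:

```python
def should_halt(equity_curve, max_drawdown=0.20):
    """Hypothetical safeguard: halt trading once equity falls more
    than max_drawdown below its running peak. The 20% threshold is
    an arbitrary illustration."""
    peak = equity_curve[0]
    for equity in equity_curve:
        peak = max(peak, equity)
        if equity < peak * (1 - max_drawdown):
            return True
    return False

print(should_halt([100, 120, 110, 115, 118]))  # False: normal variance
print(should_halt([100, 120, 95, 80, 60]))     # True: broken strategy, stop it
```

A check like this runs in microseconds; the $911,592 average F-grade bot loss suggests many operators never wrote one.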
Grade │ Bots  │ Bot Avg P&L  │ Humans │ Human Avg P&L
──────┼───────┼──────────────┼────────┼──────────────
S     │    45 │   $2,357,468 │     86 │   $1,538,785
A     │    57 │     $240,335 │    204 │     $179,053
B     │    49 │      $67,490 │    212 │      $63,160
C     │    50 │      $10,132 │    364 │      $10,907
D     │    18 │      -$1,706 │    289 │      -$1,910
F     │    19 │    -$911,592 │    237 │    -$214,342
──────┴───────┴──────────────┴────────┴──────────────
Bots: 18.9% S-grade. Humans: 6.2%. 3x gap.
What Bots and Humans Choose to Do
0xInsider classifies each trader into a primary strategy type based on their behavioral patterns — trade frequency, holding duration, market selection, position sizing, directional bias, and order book behavior. The strategy distribution reveals that bots and humans are not just executing the same game plan at different speeds. They are playing fundamentally different games, choosing different strategies, and concentrating in different parts of the prediction market ecosystem.
The most common bot strategy is directional trading, with 410 bots averaging $184,000 in P&L. These are not humans making gut calls — they are algorithms that ingest signals (price momentum, on-chain flows, sentiment indicators, model-generated probabilities, historical resolution patterns) and take directional positions systematically. They do not read news and form opinions. They process data and execute rules. The second most common bot strategy is algo_trader (196 bots, $73,000 avg), a category that covers systematic multi-factor strategies without a single dominant signal source. Then comes speculator (163 bots, -$59,000 avg) — the losing contingent. Not every bot strategy works. The speculator bots are the ones running strategies that looked compelling in backtests and failed on live markets. Automation does not guarantee profit. It guarantees consistency — and a consistently flawed strategy loses money consistently. Other bot strategies include scalper (45 bots, $57,000 avg), accumulator (22 bots, $1.32 million avg), momentum (18 bots, $47,000 avg), event_driven (14 bots, $59,000 avg), and market_maker (10 bots, $6,000 avg).
Humans cluster around two strategies: speculator (3,109 traders, -$21,000 avg) and swing_trader (3,098 traders, $12,000 avg). The speculator label covers the largest single group of human traders on the entire platform — and they are net negative. Three thousand one hundred and nine human speculators, averaging negative $21,000 each, represent a collective loss of roughly $65 million. These are traders who form views on outcomes, take positions, and on average lose money doing it. The sheer size of this group explains a huge portion of the total human P&L deficit. Swing traders, the second-largest group, fare better with a positive $12,000 average, suggesting that a patient, multi-day approach to prediction markets works better for humans than rapid-fire speculation. Human directional traders (2,843 of them, $46,000 avg) also perform well. Human event_driven traders (363, $47,000 avg) match or exceed several bot strategy categories. When humans commit to systematic, disciplined approaches and avoid the speculator trap, they close the gap with bots significantly.
The standout strategy on both sides is accumulator. 22 bot accumulators average $1.32 million in P&L — by far the highest-performing strategy in the entire dataset, bot or human. 21 human accumulators average $389,000 — also exceptional by any standard. Accumulators build large positions through high-frequency execution, typically market making or spread capture across many markets simultaneously. The strategy works spectacularly when executed well, and the small number of practitioners (43 total across both groups) reflects how technically demanding it is to run. You need infrastructure, capital, risk management, and near-perfect uptime. Most traders — human or bot — do not attempt it. Those who do and succeed earn returns an order of magnitude above every other strategy. Scalpers show another telling split: 45 bot scalpers average $57,000, while 27 human scalpers average $4,000. The speed advantage of automation makes a 14x difference in scalping returns — an enormous gap in a strategy that lives and dies by milliseconds.
Longshot trading behavior differs sharply between groups and reveals something about how each processes probability. Among traders with 20 or more markets, 22.9% of bot trades target longshots — positions priced below $0.20 or above $0.80. Humans allocate only 14.7% of their trades to longshots. Bots are 56% more likely to trade in the extremes of the probability spectrum. This is consistent with two things: first, algorithmic strategies that can identify and exploit mispricings in tail-probability markets, where human biases are most pronounced. People systematically misjudge tail probabilities, underweighting some unlikely events and over-trusting near-certain ones, and both errors create exploitable mispricings at the extremes. Second, bots can process enough markets to find the rare longshots that are genuinely mispriced, while humans tend to stick to the 30-70 cent range where outcomes feel more uncertain and positions feel more intuitive. The comfort zone costs them. The edges are at the extremes, and the bots are there.
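The longshot definition above is mechanical and easy to apply to any trade history. A small sketch using the stated thresholds:

```python
def longshot_share(trade_prices, low=0.20, high=0.80):
    """Fraction of trades at the tails of the probability spectrum,
    using the thresholds given above: price below $0.20 or above $0.80."""
    longshots = [p for p in trade_prices if p < low or p > high]
    return len(longshots) / len(trade_prices)

# Invented trade history: 4 of these 10 entry prices sit in the tails.
prices = [0.05, 0.15, 0.35, 0.50, 0.50, 0.62, 0.70, 0.85, 0.92, 0.45]
print(longshot_share(prices))  # 0.4
```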
Five Traders Who Define the Gap
The aggregate data tells the structural story. Individual traders make it real. Here are five accounts — three bots, two humans — that illustrate different ways to win in the bot-versus-human landscape. Every profile referenced is publicly available on 0xInsider, and every number comes from verified on-chain data. These are not hypothetical archetypes. They are real wallets with real money and real track records.
Theo4 is the highest-earning bot in the dataset: $22 million in realized P&L across just 14 markets. A 37.5% win rate and an accumulator strategy classification. Those numbers seem contradictory until you understand what an accumulator does at this scale. Theo4 is not winning most of its markets. It is building enormous positions in the markets it does enter, and the winners pay enough to dwarf the losers many times over. Fourteen markets, $22 million — that is $1.57 million per market on average. A single winning market for Theo4 generates more profit than the entire lifetime P&L of most traders on the platform. This is concentrated, high-conviction algorithmic trading — the polar opposite of the diversified bot archetype, and proof that there is no single winning formula. The bot found 14 markets where it believed it had a massive edge, sized accordingly, and was right enough times for the math to produce $22 million. You can study the full breakdown at 0xinsider.com/polymarket/@Theo4.
kch123 operates with more diversification: $10.1 million across 2,025 markets, 58.1% win rate. Roughly $5,000 per market on average — a fraction of Theo4's per-market figure, but spread across 145 times as many markets. This is a systematic signal-based strategy that trades frequently, wins slightly more often than it loses, and compounds the edge through sheer volume. The 58.1% win rate looks modest, but applied across 2,025 markets it produces a reliable, predictable income stream. RN1 pushes the diversification model even further: $5.3 million in P&L across 36,298 markets with an 80.2% win rate. Thirty-six thousand markets at 80% accuracy is a machine-learning signal engine running at industrial scale. The per-market profit is modest — roughly $146 — but repeated 36,298 times, the result is $5.3 million. swisstony extends the pattern to its logical extreme: $5.1 million across 54,049 markets, 57.5% win rate. More markets than any other bot in the dataset, a solid win rate, and multi-million dollar returns from pure volume and consistency.
On the human side, the success pattern is inverted. Fredi9999 is the top human earner: $16.6 million across 45 markets with a 16.7% win rate. Forty-five markets — not forty-five thousand. A 16.7% win rate means Fredi9999 loses on five out of every six markets. But the winners are so massive that they overwhelm every loss and still produce $16.6 million in net profit. This is the extreme version of a concentrated, high-conviction human strategy. Each winning trade generates enough profit to cover the five preceding losses and then some. It requires enormous risk tolerance, precise market selection, and the psychological fortitude to endure five consecutive losses without changing the strategy. The P&L curve at 0xinsider.com/polymarket/@Fredi9999 is jagged — long stretches of red punctuated by green spikes that dwarf everything else. Most traders would abandon this strategy after the third consecutive loss. Fredi9999 kept going.
PrincessCaro follows a similar pattern: $6.1 million across 14 markets, 27.3% win rate. Fourteen markets and six million dollars. KeyTransporter rounds out the top human traders: $5.7 million across 14 markets, 58.3% win rate — a more balanced profile, winning more than half of a small set of high-conviction positions. The pattern across all three top humans is unmistakable: low market count, high position size, extreme selectivity. Where bots scale horizontally across thousands of markets, the best humans scale vertically — going deep on a handful of markets where they believe they have a genuine informational or analytical edge.
Both approaches produce eight-figure results at the very top. But the bot approach works more reliably, more consistently, and with less stomach-churning volatility along the way. Theo4's $22 million came from 14 markets — concentrated, yes, but driven by algorithmic position sizing and risk management that executes without hesitation. Fredi9999's $16.6 million came from 45 markets with a 16.7% win rate — a strategy that requires enduring five losses for every win and trusting the process through months of red. The human approach demands a rare combination of conviction, patience, risk tolerance, and market selection skill that most traders simply do not possess. For every Fredi9999 earning $16.6 million, there are thousands of human speculators who tried the same concentrated approach — large positions, few markets, high conviction — and are sitting on five-figure losses. The survivorship bias in studying top humans is severe. The bots at the top earned their returns through process. The humans at the top earned theirs through a combination of process and variance that is far harder to replicate.
┌────────────────┬───────┬───────────┬────────┬─────────┐
│ Trader         │ Type  │ P&L       │ Markets│ Win Rate│
├────────────────┼───────┼───────────┼────────┼─────────┤
│ Theo4          │ Bot   │ $22.1M    │ 14     │ 37.5%   │
│ RN1            │ Bot   │ $5.3M     │ 36,298 │ 80.2%   │
│ swisstony      │ Bot   │ $5.1M     │ 54,049 │ 57.5%   │
│ Fredi9999      │ Hum   │ $16.6M    │ 45     │ 16.7%   │
│ PrincessCaro   │ Hum   │ $6.1M     │ 14     │ 27.3%   │
└────────────────┴───────┴───────────┴────────┴─────────┘
Where Bots Dominate by Category
The bot advantage is not uniform across market categories. Some categories show a massive bot edge. Others show a narrower gap. The differences reveal which types of markets are most susceptible to algorithmic exploitation — and which ones leave room for human judgment to create value. The pattern is consistent and intuitive once you see it: the more a market depends on structured data processed at speed, the more bots dominate. The more a market depends on qualitative judgment and contextual understanding, the more humans can compete.
Esports is the most lopsided category in the dataset. Bots average $161,758 in P&L on esports markets. Humans average -$5,590. The average human esports trader is not breaking even or making a modest return — they are actively losing money. This result makes intuitive sense once you consider the information environment. Esports outcomes are driven by player statistics, team compositions, historical matchup data, map win rates, patch meta, and real-time in-game events — exactly the kind of structured, quantifiable information that algorithms process better than humans can. Live odds shift fast during esports matches, and bots can react to in-game state changes (player eliminations, objective captures, round wins, economy breaks) within milliseconds by consuming game state APIs directly. A human watching a Twitch stream, processing the visual information, forming an opinion, navigating to the trading interface, and clicking a button cannot execute in the same time frame. By the time the human trades, the price already reflects what the bot knew seconds ago.
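The speed asymmetry described above can be sketched as a simple event loop. This is an illustrative sketch only, not 0xInsider's or any real bot's code: the game-state events, the `reprice` rule, and the impact figures are all hypothetical, and a real bot would consume a live feed rather than a hard-coded list.

```python
import time

# Hypothetical in-game events a bot might consume from a game-state feed.
# In practice this would be a websocket or polling client; the events are
# hard-coded here so the sketch is self-contained.
EVENTS = [
    {"type": "player_elimination", "team": "A", "impact": -0.04},
    {"type": "objective_capture",  "team": "A", "impact": +0.07},
    {"type": "economy_break",      "team": "B", "impact": -0.05},
]

def reprice(fair_price, event):
    """Shift the bot's fair-value estimate by the event's assumed impact,
    clamped to a valid probability range."""
    return min(max(fair_price + event["impact"], 0.01), 0.99)

def run_bot(initial_price, events):
    """React to each game-state event as it arrives and log the latency."""
    price = initial_price
    for event in events:
        start = time.perf_counter()
        price = reprice(price, event)
        latency_ms = (time.perf_counter() - start) * 1000
        print(f"{event['type']:>20}: fair value -> {price:.2f} "
              f"({latency_ms:.3f} ms)")
    return price

final = run_bot(0.50, EVENTS)
```

The point is not the repricing model, which is deliberately trivial. It is that the loop completes in microseconds per event, while the human path (watch the stream, form an opinion, open the trading interface, click) is measured in seconds.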
NBA shows a similar dynamic with an even wider human deficit. Bots average $27,750 in NBA markets. Humans average -$14,450. The average human NBA trader loses more than $14,000 over their tracked history. Sports prediction markets reward fast processing of injury reports, lineup changes, in-game momentum shifts, and statistical edges that emerge from large historical datasets — all areas where automated systems have a speed advantage that humans cannot close. When a star player is ruled out 30 minutes before tip-off, the injury report hits the official API before it hits Twitter. A bot monitoring that API adjusts its positions within seconds. A human who learns about the injury from a push notification three minutes later is buying at a price that already incorporates the information. Three minutes is an eternity in a market that reprices continuously.
Politics is the category where humans hold the most ground. Bots average $62,755 in political markets. Humans average $35,632. Both groups are profitable — this is the only major category where humans are in the green alongside bots. The gap exists (1.8x) but is far narrower than in sports (where humans are negative) or esports (where humans are deeply negative). Political markets reward a different kind of intelligence: understanding voter behavior in specific districts, interpreting polling methodology and its historical accuracy, reading institutional dynamics, judging the credibility of competing narratives, sensing when a political consensus is forming or fracturing, and weighing the significance of endorsements, debates, and media coverage. These are areas where human domain expertise and contextual judgment still create value that no existing algorithm can fully replicate. A model can process poll aggregates, but understanding why a particular polling firm's likely voter screen is miscalibrated in a specific cycle requires the kind of contextual knowledge that humans develop through years of political observation.
Crypto markets fall in between: bots at $17,102, humans at $3,725. Both groups are positive, but the bot edge is 4.6x. Crypto markets are data-rich, fast-moving, and highly correlated with quantifiable signals (price, volume, on-chain metrics, funding rates), which favors automation. But they also involve sentiment dynamics, narrative shifts, and community-driven speculation that give humans some foothold. The gap is real but not as fatal as esports or sports — a human with genuine crypto market expertise can still extract positive returns, just not at the same rate as the algorithms operating beside them.
The takeaway across all categories is consistent and actionable: if you are a human trader on Polymarket, your best odds of competing with bots are in markets where qualitative judgment matters most. Political markets, where understanding institutional dynamics and voter behavior creates real edge. Complex geopolitical markets, where the outcome depends on human decision-making that no historical dataset has fully captured. Cultural and entertainment markets, where predicting public sentiment requires the kind of contextual awareness that algorithms are weakest at. You cannot outrun the bots. You cannot out-execute them. You cannot process data faster than they do. But in the right categories, you can outthink them. And $35,632 in average P&L for humans in political markets proves that the opportunity is real — not theoretical. The 0xInsider leaderboard at 0xinsider.com/leaderboard lets you filter by category to find the humans who are winning in each vertical, study their profiles, and learn from their approach.
Frequently Asked Questions
What percentage of Polymarket traders are bots?
Among traders with 10 or more markets on Polymarket, 880 out of 10,582 (8.3%) are classified as bots based on behavioral analysis of trading patterns — execution speed, temporal regularity, and volume consistency. The true percentage may be higher, as borderline cases default to human classification.
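One of the behavioral signals named above, temporal regularity, can be sketched as a simple statistic. This is a minimal illustration under assumed parameters: 0xInsider's actual features and thresholds are not public, so the coefficient-of-variation rule and the 0.2 cutoff here are made up for demonstration.

```python
import statistics

def timing_regularity(trade_timestamps):
    """Coefficient of variation of inter-trade gaps.
    A bot on a fixed schedule produces near-identical gaps (CV near 0);
    a human trades in irregular bursts (CV well above 0)."""
    gaps = [b - a for a, b in zip(trade_timestamps, trade_timestamps[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

def looks_like_bot(trade_timestamps, cv_threshold=0.2):
    """Hypothetical rule: flag as a bot only when gaps are very regular.
    Borderline cases default to human, as the answer above notes."""
    if len(trade_timestamps) < 10:
        return False
    return timing_regularity(trade_timestamps) < cv_threshold

# A metronomic trader (one trade every 60 s) vs. an irregular one.
bot_like = [i * 60.0 for i in range(20)]
human_like = [0, 5, 300, 310, 4000, 4002, 9000, 9600, 20000, 20010, 50000]

print(looks_like_bot(bot_like))    # perfectly regular gaps
print(looks_like_bot(human_like))  # bursty, irregular gaps
```

A production classifier would combine several such signals (execution speed, volume consistency) rather than rely on one, but the shape of the problem is the same: bots leave statistical fingerprints that humans do not.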
Are bots more profitable than humans on Polymarket?
Yes. Bots have a 66.4% profitability rate versus 45.3% for humans. The average bot P&L is $119,156 compared to $12,671 for humans — a 9.4x gap. The median bot P&L is $2,117, while the median human P&L is -$2. Bots also have a higher average win rate (56.0% vs 48.6%) and positive expectancy per trade (+$51.23 vs -$9.00).
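The mean/median divergence in the answer above is a standard heavy-tail effect: a few large winners drag the mean up while the typical trader sits near or below zero. A toy illustration, using synthetic numbers rather than the real dataset:

```python
import statistics

# Synthetic P&L for ten hypothetical human traders: most roughly break
# even or lose a little, while one outlier wins big. These numbers are
# illustrative only — they are not drawn from the 0xInsider dataset.
human_pnl = [-120, -80, -40, -10, -2, 0, 15, 60, 200, 125_000]

mean_pnl = statistics.mean(human_pnl)
median_pnl = statistics.median(human_pnl)

print(f"mean:   ${mean_pnl:,.1f}")    # dragged up by the single outlier
print(f"median: ${median_pnl:,.1f}")  # what the typical trader experiences
```

The mean says "humans are profitable"; the median says "the typical human is not." Both are true, which is why the analysis reports both.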
How do bots have an edge over humans on prediction markets?
Bots gain their edge through three main advantages: execution (60.5% maker rate vs 41.7% for humans, meaning they set prices rather than accept them), calibration (+3.1% edge vs -14.8% for humans, meaning they price probabilities more accurately), and scale (6,487 markets avg vs 384 for humans, allowing small edges to compound across thousands of events).
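The scale advantage can be made concrete with back-of-envelope arithmetic using the per-trade expectancies quoted above. The trade counts below are assumed round numbers chosen for illustration, since the answer reports markets per trader, not trades per trader:

```python
# Per-trade expectancy figures from the FAQ above; trade counts are
# hypothetical, chosen only to show how a small per-trade edge
# compounds with volume.
BOT_EXPECTANCY = 51.23    # dollars per trade
HUMAN_EXPECTANCY = -9.00  # dollars per trade

def expected_pnl(expectancy_per_trade, n_trades):
    """Linear expectation: edge per trade times number of trades."""
    return expectancy_per_trade * n_trades

# A bot firing its edge 10,000 times vs. a human firing 500 times.
bot_total = expected_pnl(BOT_EXPECTANCY, 10_000)
human_total = expected_pnl(HUMAN_EXPECTANCY, 500)

print(f"bot:   ${bot_total:,.0f}")
print(f"human: ${human_total:,.0f}")
```

This is the article's "an edge that fires once is luck; an edge that fires thousands of times is a business" point reduced to arithmetic: the sign of the per-trade expectancy determines the direction, and volume determines the magnitude.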
Can humans still be profitable trading against bots on Polymarket?
Yes — 4,394 humans in the dataset are profitable, and the top human trader (Fredi9999) earned $16.6 million. Humans perform best in political markets, where domain expertise and contextual judgment still matter. The strategy data shows human swing traders average $12,000 in profit and human directional traders average $46,000. The key is avoiding rapid-fire speculation (human speculators average -$21,000) and focusing on categories where qualitative judgment creates an edge algorithms cannot replicate.
What happens when a bot strategy fails on Polymarket?
Failed bots lose catastrophically. F-grade bots average -$911,592 in P&L — over 4x worse than F-grade humans at -$214,342. A broken algorithm with capital access and no human oversight can incinerate money faster than any manual trader. However, only 8.0% of graded bots earn F-grade, compared to 17.0% of graded humans. Bots fail less often, but when they do, the damage is extreme.
Which prediction market categories are hardest for humans to compete with bots?
Esports and NBA markets show the largest bot advantage. In esports, bots average $161,758 while humans average -$5,590. In NBA, bots average $27,750 versus -$14,450 for humans. These categories are driven by structured, quantifiable data (player stats, injury reports, in-game events) that algorithms process faster and more accurately. Politics has the narrowest gap — bots at $62,755, humans at $35,632 — because political markets reward qualitative judgment that algorithms struggle to replicate.