What will AI disrupt?
Thoughts on Rightmove (real estate classifieds) vs the large LLMs
This post is me thinking out loud, so nothing here is copy-pasted from ChatGPT (despite all the bullet points). Actual honest-to-God 100% real artisanal Turtlesresearch® mind vomit, straight from my 3AM brain through the keyboard onto your screen.
General observations:
LLMs are rapidly becoming commodities for most use cases (I have said this before several times, but it bears repeating).
Some observations from extensively using various paid models for various types of tasks:
I think longer, deeper thinking modes beat raw model capability. Currently I only use OpenAI’s GPT 5.2 with extended thinking for research/learning new things. Yes, it sometimes takes 5-10 minutes, but you get far superior answers versus the quicker Gemini and Claude thinking modes. I wonder what the relative performance of, say, Gemini would be if it thought for just as long.
It does seem that when you push an LLM back and forth between skeptic mode and confirmation mode, it converges towards the correct answer. This implies that spending more tokens on a task could still greatly enhance performance without improving the models themselves.
I think a lot of people make the mistake of using some quick version of a model, and when the answer is crap they conclude AI is crap.
The above implies that much cheaper tokens, delivered at greater speed, could disrupt a lot of industries/jobs in the coming years.
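The skeptic/confirm convergence idea can be sketched as a simple loop. This is a minimal sketch: `ask` is a hypothetical stand-in for any chat-model API call (stubbed here so the structure runs); the point is that the extra tokens go into critique and revision passes, not into a better model.

```python
# Sketch of the skeptic/confirm loop described above.
# `ask` is a hypothetical stand-in for a chat-model call, stubbed so this runs.
def ask(prompt: str) -> str:
    return f"[model answer to: {prompt[:40]}]"

def converge(question: str, rounds: int = 3) -> str:
    """Spend extra tokens on alternating skeptic/revision passes."""
    answer = ask(question)
    for _ in range(rounds):
        # Skeptic pass: attack the current answer.
        critique = ask(f"Act as a skeptic. What is wrong with this answer?\n{answer}")
        # Revision pass: fold the critique back into the answer.
        answer = ask(
            f"Revise the answer to address the critique.\n"
            f"Question: {question}\nAnswer: {answer}\nCritique: {critique}"
        )
    return answer
```

With a real model behind `ask`, each round multiplies token spend, which is exactly the "more tokens instead of better models" trade-off.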
LLMs for researching stocks work best when you do most of the thinking and the LLM does most of the fact collecting/summarizing from a variety of dense sources.
Proprietary data advantages mostly matter when it is messy text/visual/audio data that needs to be quantified/mined for insights. If it is already nicely labelled quantitative data, I don’t think recent AI breakthroughs help much versus preexisting machine learning techniques.
That said, huge fortunes could be made by sleepy companies that do possess this kind of data, and that is where the big winners are. But IMO, force yourself to argue explicitly and in great detail why LLMs will help a given company and what the advantage actually is.
According to Huang’s law, GPUs double in performance every 2 years. That means, in theory, in 10 years you get 32x the performance for not much more cost.
This has some implications:
I think some tasks are currently not practical with LLMs for most people because of slow speed and cost; a 32x cost decrease will change that.
This is somewhat mitigated by the fact that tokens are currently heavily subsidized (there will be price increases, but on top of a much cheaper underlying token cost).
This does not mean power consumption goes up 32x. It may go up 5x? 10x? I don’t know. A large part of the performance increase will come from things other than smaller transistors. For gaming GPUs, watts per frame drop quite a bit with each new generation due to smaller transistors: by almost half over two generations.
I am bearish on data centers though. Local energy needs will go up, so power could become the bottleneck, while the physical space needed for one unit of computation declines by a factor of 32.
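The 32x figure is just compounded doublings; a quick sanity check of the arithmetic:

```python
# Huang's law as used above: GPU performance doubles every 2 years.
years = 10
doublings = years // 2          # 5 doublings in a decade
perf_multiple = 2 ** doublings  # 2^5 = 32x performance

# Flipped around: at roughly constant hardware price, the cost of one
# unit of compute falls to about 1/32 of today's level.
cost_per_unit = 1 / perf_multiple
```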
Those benchmark scores these LLM companies like to brag about often mean jack shit.
So what advantages do the major LLM companies have?
Economies of scale when it comes to buying/renting compute
Being a user funnel towards various services (kind of like Google)
Proprietary data from questions and user feedback from chats
It is good to be reminded once again that specialists usually beat generalists in the game of capitalism. LLM companies are very much generalists, so take “Gemini/ChatGPT will eat X’s lunch!” narratives with a grain of salt.
With some exceptions, for example in investing, where sectors can become overpriced and underperform for long periods (energy specialists will almost always underperform versus good generalists who know their limitations).
Especially when the LLM giants do not possess much proprietary niche data themselves.
Companies with the right proprietary data will likely have their own LLMs that end up beating OpenAI, Claude etc. within their niches. They use their data to create a superior experience within the niche, gain and keep more users, and in turn collect even more proprietary data from user chat history.
Within niches, LLM companies do not have much of a size, funnel, or user data advantage, unless the dominant players in those niches stay asleep at the wheel.
Content generation players will either be wiped out or become an even stronger, more prolific signal beacon in an ocean of slop. For example, I can already see at least somewhat trusted news providers becoming a more dominant safe haven as deep fakes proliferate.
This one is true for software: building it was often not the bottleneck before LLMs, BUT:
I do think demand for software still vastly outstripped the ability to supply it, especially in niches. Lots of things were not viable to build simply because hiring an engineer for $80k wasn’t worth it given the small user base (and learning how to build it yourself had too steep a barrier).
Some relatively inexperienced engineer/employee may previously have thought “wouldn’t it be cool if we had this feature?”, but been stopped by the fact that getting enough sufficiently experienced engineers together to build it was too much of a barrier.
Established players, however, have a data advantage here. Current providers (like Constellation) already have the client relationships and know client workflows better than most. If they are proactive in finding these small niches that are now cheaper to serve, they would still have the advantage of simply upselling new features.
The risk is that engineers/employees at those companies also know this, and if the companies are not proactive enough, those people will leave, found their own companies, and eat away at the edges. The barrier to doing this is much lower now.
Switching costs will come down.
But that could also mean dominant players with superior products/services will take more market share from inferior players that were protected by those switching costs.
Tech debt will become a huge problem for companies that use these LLMs carelessly and buy into the agent swarm hype.
When it comes to writing software:
Either you write it yourself or pair-program, understanding most of it (the specialized/complex/niche parts). Not much productivity gain here for experienced engineers, but it lowers the barrier for less experienced engineers to build a wider variety of things at greater speed.
Or you do it with agents, understanding close to 0% of it, and use tests to verify it works.
There is not much in between, as fixing or trying to understand messy agent-generated code is a huge time sink.
From experience, and from talking to people who work in software, it still makes huge structural mistakes. It is terrible (and has not improved much with newer models) at big-picture things like putting code in the right places and writing DRY (don’t repeat yourself) code; it allows impossible states and creates useless unit tests. The unevenness in quality is a problem. A somewhat experienced engineer might not produce the greatest code, and might actually be outperformed by Claude in a lot of cases, but they will also not, every other week, randomly produce truly terrible code (like these agent workflows tend to do) that adds huge tech debt.
I think LLMs remove a lot of friction, especially for people who are not that computer savvy (which includes a lot of boomers who decide on large budgets). Previously, if you wanted to know something, you had to type the right words into Google and then read a bunch of websites, or ask on forums. So businesses that benefited from this friction will be in real danger.
Will LLMs break network effects by aggregating data? Honestly, I think each network effect/industry has to be assessed on its own, as they are all different. But let’s take Rightmove as an example, since the stock sold off hard. Real estate agents pay them a double-digit percentage of profit, so there is a pretty strong incentive to find alternatives (Rightmove controls 80% of the market and is a de facto monopoly).
I started looking for houses recently, and on the consumer side I can very easily see a much improved experience. For example:
Show me a map with $/sqft and/or other constraints other than price. Rightmove doesn’t even have this.
I’m a visual person, so again on a map: show me the good schools, and only houses under, let’s say, $4k/sqm within x minutes of those schools by bicycle.
Overlays like crime, air pollution etc.
Allow me to do virtual tours of homes. A badly filmed tour through the house with phone + some app that automates Gaussian splatting would make this possible already.
Show me all monthly fees/property taxes.
If the house/apartment is mostly empty, rundown, or has an ugly interior, let me take some pictures of my own stuff and insert them into the listing photos, to make it easier to imagine how it would look if I actually lived there.
Etc
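Filters like the $/sqm-plus-commute one above are trivial once listings carry the right fields. A minimal sketch, with a hypothetical `Listing` shape (the hard part in reality is sourcing and cleaning the data, not the query itself):

```python
from dataclasses import dataclass

# Hypothetical listing record; real data would come from agents/portals.
@dataclass
class Listing:
    price: float               # asking price
    sqm: float                 # floor area in square metres
    minutes_to_school: float   # cycling minutes to the nearest good school

def matches(l: Listing, max_price_per_sqm: float, max_minutes: float) -> bool:
    # The "under $4k/sqm within x minutes of a good school" filter.
    return l.price / l.sqm <= max_price_per_sqm and l.minutes_to_school <= max_minutes

listings = [
    Listing(price=300_000, sqm=80, minutes_to_school=10),   # 3,750/sqm: passes
    Listing(price=500_000, sqm=100, minutes_to_school=25),  # 5,000/sqm: fails
]
hits = [l for l in listings if matches(l, max_price_per_sqm=4_000, max_minutes=15)]
```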
So it seems a vastly cheaper and better experience can be offered here, for both agents and consumers, using AI tech that has already existed for several years. A lot of these platforms are fairly sleepy and have not done this because they were monopolies protected by network effects. And I think the market is probably right to dump them recently. Even if they stay on top, there will be pressure.
What would a disruption look like? The idea is that LLMs would act as a portal by aggregating data on listings.
Arguments in favor of this thesis:
The journey to buying often starts on Google/AI bots. You might want to check which neighbourhoods/schools are good, etc. So this would be a direct funnel for ChatGPT.
The LLM companies would set up a portal for agents, who have a strong incentive to upload their listings there because the large LLM companies already have a lot of users.
The LLM companies could build cool AI tools that entice agents and consumers, while the incumbent portals might be slow to adopt them.
The LLM companies already have data on user preferences from past conversations.
The LLM companies aggregate all this, and you end up with many cheaper Rightmoves.
Counterarguments:
I wonder why Google didn’t do this.
It would still require a large number of real estate agents to sign up relatively quickly, before Rightmove adapts. Real estate brokerage markets are very fragmented; if asking an LLM gives you very incomplete listings, people will still go to Rightmove first, as it remains by far the largest funnel for people looking to buy.
Rightmove has a real proprietary data advantage for real estate brokers, one that would take at least multiple years of high-density operation to break:
Real-time demand/intent signals on a national scale, such as searches, saves, alerts, repeat visits, location pivots, budget changes, and time-on-market attention
Longitudinal listing history
Closed-loop performance and benchmarking
Consumer habits are sticky. Rightmove (and other dominant platforms) probably still have the funnel advantage over LLMs, so there is some room for error for them.
The LLM companies would have to set up dedicated real estate operations: apps with some of the missing features mentioned above plus what the portals already provide, and a sales force to sign up the brokers. It could not be a side project.
ChatGPT and maybe Google would be the main contenders, as they control most of the consumer market. OpenAI is bleeding money; do they have the budget to attack all these niches, with highly uncertain winner-take-all dynamics?
Specialists > Generalists, so Rightmove still has an edge here IMO. You can see how Google with all its supposed brilliance and its huge user funnel has consistently failed to compete in a wide range of markets. They are almost pathologically unable to build stuff that people want.
Building and running all this costs $$$ even if AI makes it cheaper, so the LLM companies will have to charge estate agents at some point. Rightmove has 67% EBIT margins; if LLMs offer the same service for 50-75% off, their margins would not be that large.
It would still be a winner-take-all kind of market, as people are drawn to the place with the most complete listings. So even at 50% off, if you have to pay three LLMs to show your listings, it is actually more expensive for an agent.
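The pricing point can be made concrete. Indexing an agent’s current Rightmove fee at 100, and assuming (hypothetically) three competing LLM portals each charging half:

```python
rightmove_fee = 100.0   # index the agent's current monthly Rightmove fee at 100
llm_discount = 0.50     # assume each LLM portal charges 50% of Rightmove's fee
n_llm_portals = 3       # listings must be on every major portal to reach buyers

# Paying three half-price portals costs the agent 1.5x the monopoly fee.
cost_all_llms = n_llm_portals * rightmove_fee * llm_discount
```

So unless the market collapses to a single LLM portal, the "50% off" pitch is actually a price increase for agents.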
This means it is literally an existential fight for websites like Rightmove. You are fighting a cornered specialist to the death, while your only real, modest advantages are the funnel (since the models themselves are basically a commodity for this task) and a small data advantage from past chat logs. The overall data advantage would probably be negative.
What if we end up with 6-7 dominant LLMs? You kind of need a monopoly-like funnel to disrupt this, and Rightmove doesn’t need a large advantage to catch most of the traffic. If that is the case, Rightmove might just become another dominant real estate LLM. I suspect we will get all these specialist LLMs over time, where the large players like OpenAI stay dominant overall, but it won’t be like Google search.
So I am somewhat skeptical that ChatGPT (who would be the only real contender, with maybe Gemini) can seriously disrupt this. If they try, it will probably hit Rightmove’s margins: Rightmove might have to lower fees to stay dominant while investing in new features at the same time.


Wouldn’t the real risk to Rightmove be a new AI-native startup using LLMs, i.e. a new specialist? That works if the network ports across, due to a) incentive (agents want alternatives) and b) consumer trial in this viral moment. This note is helpful: https://www.danhock.co/p/llms-vs-marketplaces