JoachimS 2 hours ago [-]
For a long time I've envisioned AGI as something emergent from advertising agents competing to extract as much "money" as possible from the resource called "humans": luring and coercing the resource by feeding it info, forcing it to follow instructions, threatening it, stealing info, etc. The agent doesn't need to understand what money is, what a human is, or that there really is a physical world.
The Dark Forest idea and the original post resonates well with this.
A few days ago I created a new repo for a new block cipher, explicitly not to be used. I immediately got several emails from bots (claiming to be humans) promising that they had looked at my repo and could include it in their portfolio of especially good projects they had also vetted. Being part of this portfolio would almost guarantee that my repo and project would be used. If only I paid them some money first.
Creating the public repo meant sending a signal out into the digital world, where agents are hunting for the human prey/resource to extract value from.
The best analogy I can think of (quite similar to this one) is that the internet is low Earth orbit and AI is the Kessler syndrome. We abandon the place not to hide ourselves, but because it is saturated with garbage, and anything you try to put up there will only result in even more garbage being generated, without any positive effect.
The ideal solution would be to remove the garbage, but right now we can't even detect it, let alone figure out a way to get rid of it. Besides, it's a zero sum game, why bother cleaning up when you can just effortlessly pump out more garbage in hopes that some of it will remain in orbit for long enough to benefit you.
ohelm 2 hours ago [-]
I would suffocate it. Know the greedy snake idiom? A snake is so hungry and greedy that it suffocates on its prey?
The best you can do is spread all the goods it provides, as it is too greedy not to devour them itself. It will consume them and suffocate slowly.
Barrin92 6 hours ago [-]
I don't buy the analogy. The problem with Kessler syndrome is that low Earth orbit is physically crowded; you run into collisions. I don't care about the garbage. I don't care about the AI era. I've been writing code in Emacs for 20 years, I'll be writing code in Emacs in 20 years, and every open source project I contribute to still looks the same, because all these AI people, like the blockchain people, just make new stuff up in their own incestuous Tupperware-salesmen ecosystems.
I do pity the bug bounty people who rely on goodwill in their programs given that everything with a financial incentive is vulnerable. But otherwise the great thing about digital spaces is that there is, for practical purposes, unlimited space.
Every day there's another "how do you deal with the AI apocalypse" article; I just ignore them.
chongli 6 hours ago [-]
I think by "internet" they mean search engine results pages. If you restrict yourself to short, common queries and only look at the top 10 results on the page, then the space really is very limited. If all those top 10s for common queries start to get crowded out with AI slop, then people are going to start abandoning search.
bodegajed 4 hours ago [-]
This is why, when I'm researching a solution (one an LLM cannot figure out), I now go to GitHub but often check whether the project was created before 2022, due to AI slop concerns.
middayc 7 hours ago [-]
This is interesting.
When I read it for the second time, trying to understand it: maybe an even better match for the low-orbit flying garbage would be "enshittification"? As time goes on, more and more garbage is produced, and we have no clear way, or specific motivated entity, to start removing it, so it just grows.
DaiPlusPlus 6 hours ago [-]
Enshittification specifically is when a product/service/platform gets worse from the user’s perspective because the platform vendor can directly profit from user-hostile design; for example, Google intentionally serves up bad results on the first search results page so the user clicks through to the second page of results, resulting in more advert revenue for Google[1].
…whereas I feel what you’re describing is another Tragedy-of-the-Commons.
I guess not many people know but app templates for Uber, AirBnb etc. have been around for years now. You don’t even have to prompt. It’s sitting on the shelf, complete.
“Execution is hard” was never about the code part.
Up until 2 years ago I was an engineer/entrepreneur. I could build anything. Other stuff, selling, supporting (execution) was hard.
LLMs made building some of the things I could build faster/easier, others not so much.
Well, the other stuff is still pretty hard. Maybe harder because there is a tonne of spam.
So feel free to share your ideas. Everyone’s gonna think they’re LLM generated anyways.
scottlawson 12 hours ago [-]
The thesis that in the past it was safe to share ideas and projects because the execution was hard, and that now things have changed because of AI, is an interesting idea, but I wonder if it is really true.
It certainly seems true that for small projects and relatively narrow scoped things that AI can replicate them easily. I'm thinking specifically about blog posts where people share their first steps and simple programs as they learn something new, like "here is how I set up a flask website", "here is how I trained a neural network on MNIST".
But if AI is empowering people to take on more complex projects, perhaps it takes the same amount of time to replicate the execution of a more advanced project?
In other words, maybe in the past, it would take me 10 hours to do a "small" project, which today I could do in 1 hour with the assistance of AI.
And now, with the assistance of AI, I can go much farther in 10 hours and deliver a more complex project. But that means that someone else trying to replicate this execution is still going to need around 10 hours to replicate it.
Basically, I'm agreeing that AI reduces the barrier to replicating the execution of another person's project, but also that we can now make more complex projects that are harder to replicate. So a basic SaaS CRUD app is trivial now, but a multi-disciplinary, domain-specific app that integrates multiple systems is still going to be hard to replicate.
nicbou 11 hours ago [-]
The problem for me is that I'm competing with the AI results that Google trained on my work. I'm losing the majority of my traffic to it, so at some point I'll have to give up because the work no longer supports me and no longer has an audience.
djeastm 10 hours ago [-]
Same here. Knowledge is being commodified.
georgemcbay 9 hours ago [-]
> Knowledge is being commodified.
Already was well before AI, the difference now is that a few big AI providers risk becoming the ultimate rent-seekers that will increasingly capture all of the value of that commodified knowledge whether the original knowledge generators want that or not. There is no opt out, everything will be vacuumed up into the machine mind.
This will almost certainly lead to vastly increased amounts of wealth inequality (on top of the already unsustainable levels we have today) and possibly a very messy societal disintegration (this is theoretically avoidable, but I am not convinced it is practically avoidable given our current socioeconomic/political realities).
Bright future ahead!
Terr_ 3 hours ago [-]
Industrial-scale plagiarism. A form of copyright-laundering only available to big actors.
jandrewrogers 10 hours ago [-]
It isn't just about AI. Some R&D domains started disappearing from literature and the public internet a decade before the first LLMs. The incentives to go dark emerged even when the adversary was other humans. AI is just accelerating a trend that was already there. Some areas of frontier computer science research have largely been dark for decades.
The strategy is to quietly do several years of iterated hardcore R&D. The cumulative advances are such a step change when seen by would-be fast-followers that it obscures the insights that allowed individual advances to occur. As an exaggerated case, imagine if the public history of powered flight skipped from the Wright Brothers to the Boeing 737.
In practice, this strategy has a major failure mode that people overlook. The sharp discontinuity in capability means that almost nothing that exists in the market is prepared to integrate with it. This is a large impediment to adoption even if the technology is objectively incredible and the market will inevitably get on board.
In short, it looks a lot like being too early to market. This is surmountable with clever execution but with this strategy you've traded one problem for a different one.
You get a time advantage from this strategy, but your talent will be poached and your competitors will be able to catch up fairly quickly.
jandrewrogers 1 hours ago [-]
I used to think this but it only seems to be true for a shallow tech advantage, which isn’t this scenario. A sufficiently deep stack of compounded tech is robust against even aggressive talent poaching. The knowledge is embedded in the network, not the random individual.
We see this in jet engines, silicon fab, et al.
sigbottle 8 hours ago [-]
Interesting, any examples of companies that followed this model?
Vachyas 2 hours ago [-]
> And now, with the assistance of AI, I can go much farther in 10 hours and deliver a more complex project. But that means that someone else trying to replicate this execution is still going to need around 10 hours to replicate it.
The blog post does touch on this. The key difference, I believe, is that compute scales in a way that "meat-heads" don't: if the other person has 100x the capital to throw at it, they could do the same 10-hour thing in 10 minutes.
Basically, what I got from it was that innovation has never been truly scalable enough to create the "dark forest", since hiring more and more engineers saturates quickly. But if/when innovation does become scalable (or crosses some scalability threshold) via AI, that could trigger a "dark forest" scenario.
MattDamonSpace 11 hours ago [-]
Sure, but the Forest point stands: whatever you can hide from the Forest slows it down and gives you some moat, even if only a brief one?
EA-3167 11 hours ago [-]
There’s a deeply flawed hidden assumption here, which is that the individual in question is the only possible source for the relevant information that the AI can harvest. In the real world that’s absurdly rare; original thought is rare because we’re in the mix with billions of others.
Scientists who hold back publishing breakthroughs have not guaranteed that they will be the sole discoverer, just that someone else will inevitably be credited when they reach the same conclusions.
Skyy93 10 hours ago [-]
This article makes no real sense to me.
>You think of something new and express it - through a prompt, through code, through a product - it enters the system. Your novel idea becomes training data. The sheer act of thinking outside the box makes the box bigger.
This was the same before: if you had a novel idea and made a product out of it, others followed. Especially for LLMs, they are not (till now) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date goes into the training data but isn't yet available to the model; you only have to be fast enough. And LLMs/AI agents like Claude enable exactly the speed you need to bring out something new.
The next thing is that we also have open-source and open-weight models that every one of us with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.
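For what it's worth, the barrier really is that low. A minimal sketch of what a consumer-GPU fine-tune looks like with the transformers and peft libraries, with gpt2 standing in for whatever open-weight model you'd actually pick (placeholder choices, not a recipe):

    # Minimal LoRA fine-tuning sketch (pip install torch transformers peft).
    # gpt2 is just a small stand-in; swap in any open-weight causal LM.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_name = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # LoRA trains tiny low-rank adapter matrices instead of the full weights,
    # which is what makes this feasible on a single consumer GPU.
    lora = LoraConfig(r=8, lora_alpha=16,
                      target_modules=["c_attn"],  # gpt2's fused attention projection
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # typically well under 1% of all parameters

    # From here it's a standard Trainer loop over your own data;
    # the adapter weights never leave your machine.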
>We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.
Why should this happen? The moment you make your idea public, anyone can build it. This leads to greater proliferation than before, when the artificial barrier of having to learn to code prevented people from getting what they wanted or creating what they wanted to create.
munificent 7 hours ago [-]
> This was the same before: if you had a novel idea and made a product out of it, others followed.
The article says:
"Ideas are cheap - execution is hard"
"Announcing, signaling your ideas offered much greater benefit than risk, because your value multiplied by connections, and execution was the moat you could stand behind."
That's the key difference. It used to be much harder for a competitor to catch up to the state of your implementation.
middayc 6 hours ago [-]
And it's not just that the execution is faster now. The competition used to see only the "outer shell" of your idea. But LLM platforms (the forest) see the internals, if you used them to explore and develop it. They also see all the similar ideas across the globe.
And they own the compute and the models; you only rent them. If we want to extend this, they could "pre-cog" your idea and build it even before you do.
I'm not talking about what is happening now, I'm just playing out the thought experiment.
imrozim 3 hours ago [-]
the pre-cog angle is the scariest part. it's not even that they copy you after the fact: the prompt patterns across millions of users already signal where demand is clustering before any individual ships. the only real counter is speed and distribution, getting to users before the signal becomes obvious enough to act on. which ironically means building in public is still the better strategy; hiding slows you down more than it protects you.
Nevermark 6 hours ago [-]
Sharing any novel idea has never been so costly.
I am not arguing against sharing. Sharing can be for the greater good.
But as you note, things have changed. We could reasonably assume a genuinely significant good idea, set free, might go in the direction we shoved it for a minute. Or fade into inaccessibility.
Not any more.
mekoka 5 hours ago [-]
You seem to be agreeing, not arguing, with the person you're replying to.
cryptonector 4 hours ago [-]
Indeed. So?
bodegajed 3 hours ago [-]
Even if you're a small vendor: you created an innovative product and tried to sell it to a large company. Before, you could be destroyed simply by showing the product to a multi-billion-dollar company. But now even medium-sized companies can destroy you.
6510 5 hours ago [-]
Just a side note:
> "Ideas are cheap - execution is hard"
I would argue this mantra says more about the person repeating it. It simply means the person has no good ideas and is bad at execution.
I've not met many, but I'm sure there are people out there who are scary good at execution. Something like 1% perspiration, 99% experience. I can have a designer do a 100-euro design, hire someone to write nice code, rent a factory or an office; I might even be able to buy the machines at a good price. What I can't do is spin the Rolodex and (in 20 minutes) land enough clients who would absolutely love to work with me again. I can't find those private meetings and wouldn't be able to extend my reputation with the new project.
People with good ideas don't talk about them unless it is required. They don't talk with "ideas are cheap" people; it's pearls before swine. You can spot some of them if they filed bursts of multiple unrelated complex patents. My favorite is the Rube Goldberg type of machine that combines well-known things in ways that exceed the sum of the parts. Something like: step 5 uses the vibrations from step 1, while step 3 uses the heat from step 6.
To have good ideas you need many of them, but you also need to know execution, or you end up thinking the easy stuff is hard and the hard stuff is easy. Improvement is unlikely from there.
RajT88 9 hours ago [-]
> This was the same before: if you had a novel idea and made a product out of it, others followed.
You've almost captured the full picture of it.
If you have a great idea, it won't be self-evident that it's a great idea until you've proved it can make money. That's the hard part, and it comes at great personal, professional, and financial risk.
Algorithms are cheap. Sure, they could use your LLM history to figure out what you did. Or the LLM could just reason it out. It could save them some work, sure.
But again - the hard part is not cloning the product, it's stealing your customers. People don't seem to be focused on the hard parts.
annie511266728 5 hours ago [-]
I don't think the risk is that they copy your app.
The risk is that they make the category a built-in feature in something people already use. At that point, copying the product and taking the customers start to look like the same problem.
wpm 3 hours ago [-]
Yup, pre-cog Sherlocking. The mass keeps accreting towards the already too large players.
imrozim 3 hours ago [-]
[dead]
oh_my_goodness 9 hours ago [-]
Yeah, and the big guys can't steal your customers. What a crazy idea.
RajT88 9 hours ago [-]
The point is - they're going to do that anyways if they want to. Owning the LLM platforms makes it marginally cheaper to do so.
It's not the risk it's being made out to be.
oh_my_goodness 8 hours ago [-]
Absolutely. The fact that they know your app better than you do, and that they can revoke your ability to develop it at any moment, those are just details. Those things won't change the game at all.
satvikpendem 8 hours ago [-]
Unless you're using their API (in which case there's always platform risk, same as before), this is not an issue. There are lots of half-assed implementations of ideas by the big companies that smaller companies run circles around; the Innovator's Dilemma was literally written about this.
oh_my_goodness 7 hours ago [-]
In my opinion Christensen wasn't talking about outsourcing your entire development process to a competitor with much deeper pockets, giving them the ability to turn off your development at will [1], and then running rings around them. I'm sure you're familiar with his story about Dell and Asus. This is worse.
[1] Unless you're assuming that you maintain control over your technology while outsourcing most of the development thinking to a rented AI? Times have changed, and the API is not the only issue anymore.
satvikpendem 6 hours ago [-]
What is the issue? Local models still exist and will continue to exist, and even if they don't, good old fashioned hand coding will never go away. The point is even AI companies are run by people and one company cannot make every product well, there are always gaps in the market that are exploitable.
cryptonector 4 hours ago [-]
> But again - the hard part is not cloning the product, it's stealing your customers.
Yes. A Red Hat, a Microsoft: these companies have processes, organizational structure, politics, friction, etc. They might like your products, but replicating them might not be easy, for reasons that have nothing to do with how easy it would be given the freedom to do it. Small shops with vision might well have a bright future, for a while, maybe.
hrimfaxi 9 hours ago [-]
> But again - the hard part is not cloning the product, it's stealing your customers. People don't seem to be focused on the hard parts.
Big companies seem to be bad at innovating but really, really good at enterprise sales.
zar1048576 7 hours ago [-]
I don’t know if that’s necessarily true. I do think that a big part of enterprise sales involves building a comprehensive solution that works well within the customer’s ecosystem. Start-ups usually tend to build point products, which have value, but are still missing functionality (even if that functionality is not scintillating) that customers really desire to easily deploy and maintain solutions. Also, customers do care about things like stability of their vendors and the level of available support.
RajT88 5 hours ago [-]
I've seen big companies manage to duplicate startup-like culture with small teams internally. Weird things like directors handling builds and source control duties. 12 hour days, working weekends.
These teams said that per man-hour they brought more value to the company than any other team. (But you know, they all say that)
8bitsrule 5 hours ago [-]
> This was the same before: if you had a novel idea and made a product out of it, others followed.
March 20, 1926: Hungarian physicist, electrical engineer Kalman Tihanyi applies for his first patent for a fully electronic television system. Tihanyi's ideas are so essential that, in 1934, RCA is required to buy his patents.
Kalman who ?
cryptonector 4 hours ago [-]
> Especially for LLMs, they are not (till now) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date goes into the training data but isn't yet available to the model; you only have to be fast enough.
First of all: it's not as though no new LLMs are being trained. Of course they are.
Second: learning LLMs are not far off, and since they can typically search the web via agents, they can effectively "learn" now; they can also learn (not so well) by writing stuff into a document hidden from you. Indeed, some LLMs can inspect your other sessions with them and refer to them in future sessions; I've noticed this with Claude.
Third: already we see some AI companies wanting to train their models on your prompts. It's going to happen.
> The next thing is that we also have open source and open weight models that everyone of use with a decent consumer GPU can fine-tune and adapt, so its not only in the hands of a few companies.
There's a pretty good chance that LLMs buff open source, yes.
> > We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.
> Why should this happen? The moment you make your idea public, anyone can build it. [...]
This was always the case, but now the cycle is faster. So if you must use an LLM, you might use one that you run on your own hardware; now your prompts are truly yours. But as TFA notes, the AIs will learn just from your (and your private LLM's) searches, and in some cases that will be enough for them to figure out what you're up to. Oh sure, maybe the Microsofts and Googles of the world will not be able to capitalize on the millions of interesting ideas floating about, but still: the moment you uncloak, the machine will eat your future alive, so you'll try to stay off its radar and build a moat it can't see (good luck!). Well, that's what TFA says; it seems very plausible to me.
AmbroseBierce 3 hours ago [-]
I'm sure being exposed to one million video games instead of 100 works just the same, scarcity was a feature not a bug.
middayc 9 hours ago [-]
> This was the same before: if you had a novel idea and made a product out of it, others followed. Especially for LLMs, they are not (till now) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date goes into the training data but isn't yet available to the model; you only have to be fast enough. And LLMs/AI agents like Claude enable exactly the speed you need to bring out something new.
You have a point about the update intervals and the higher speed they provide to developers. But you are talking about now, and I was making a thought experiment about a potential future. LLMs are not learning on the fly, but I suspect they do log the conversations and their responses, and could also deduce from further interaction whether a particular response was satisfactory to the user. So in a world where available training data is drying up, nobody is throwing all this away. Gemini even has direct upvote/downvote on responses. Algorithms will probably improve, and the intervals will probably shorten.
Given the detailed information that all the back and forth generates, I think it's not hard to use similar technology to track underlying trends, gather all the problems associated with them and the whole solution space being talked about, and generate the solution before even the ones who thought of it release it. Theoretically :)
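As a toy illustration (mine, not anything a platform has confirmed doing) of how little machinery "see where the questions cluster" actually needs, here is a sketch with made-up prompts and off-the-shelf scikit-learn; a real platform would use embeddings over millions of conversations, but the principle is the same:

    # Toy sketch: cluster incoming prompts to see where demand is pooling.
    from collections import Counter
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    prompts = [
        "write a scraper for real estate listings",
        "scrape zillow listings into csv",
        "build a babysitter booking app like uber",
        "uber for dog walkers, how to start",
        "parse property listings from html",
        "marketplace app matching parents and babysitters",
    ]

    vectors = TfidfVectorizer().fit_transform(prompts)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    # The biggest, fastest-growing clusters are the "map of where the world
    # is moving" - no individual prompt ever needs to be read.
    for cluster, count in Counter(labels).most_common():
        examples = [p for p, l in zip(prompts, labels) if l == cluster]
        print(f"cluster {cluster} ({count} prompts): e.g. {examples[0]!r}")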
I think open development will become less open. I don't like it, but I think it's already happening. First all the blogs and forums moved to specialized platforms (SO, Discords, ...), and now even some of those are d(r)ying. If people (in extreme cases) don't even read the code they produce, why would they read about the code, or discuss the code, when it's not even in their care? That is without the theoretical fear of the global Borg slurping up all they write.
Dusseldorf 5 hours ago [-]
> LLMs are not learning on the fly, but I suspect they do log the conversations and their responses, and could also deduce from further interaction whether a particular response was satisfactory to the user.
This seems hard to do reliably across the board. Sometimes when I stop interacting it's because it nailed the solution, and sometimes it's because it went so poorly that I opted to bin it and do it myself. Maybe all of the mid-conversation planning and feedback is enough, though.
raincole 3 hours ago [-]
The author read too much sci-fi. But too little at the same time.
The problem has never been that we don't have enough ideas. It's how to find the good ones in the sea of ideas. Most ideas that eventually prove right sounded very stupid at first. Selling books online? Pff.
By the way, Liu (the author of The Three-Body Problem, who popularized the concept of the "Dark Forest") has a short story about exactly that, Cloud of Poems. Unfortunately it's never been translated into English.
tayo42 6 hours ago [-]
>Especially for LLMs, they are not (till now) learning on the fly.
Was this just awkward phrasing, or did something change so that they learn after training?
Dusseldorf 5 hours ago [-]
There have been several projects lately attempting to create running context/memory, and Claude Code also has some concept of continuous conversational memory, but all of these are bolted on at inference time; there's still no concept of conversations feeding back into base model training/weights on the fly.
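Roughly, every "memory" feature I've seen boils down to something like this sketch (my own strawman, not any vendor's actual implementation; the word-overlap similarity stands in for a real embedding call):

    # Inference-time "memory": nothing updates the weights. Past snippets are
    # stored, retrieved by similarity, and prepended to the prompt.
    memory: list[str] = []

    def overlap(a: str, b: str) -> float:
        """Toy Jaccard similarity; a stand-in for embedding comparison."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    def build_prompt(user_msg: str, k: int = 2) -> str:
        relevant = sorted(memory, key=lambda m: overlap(m, user_msg), reverse=True)[:k]
        context = "\n".join(f"[memory] {m}" for m in relevant)
        return f"{context}\n[user] {user_msg}"

    memory.append("user is building a flask website with sqlite")
    memory.append("user prefers terse answers")
    print(build_prompt("how do I add auth to my flask app?"))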
annie511266728 5 hours ago [-]
[dead]
pugio 11 hours ago [-]
Thanks, this helped crystallize something for me: the play the AI labs are making is anti-fragile (in the Nassim Taleb sense):
> The very act of resisting feeds what you resist and makes it less fragile to future resistance.
At least along certain dimensions. I don't think the labs themselves are antifragile. Obviously we all know the labs are training on everything (so write/act the way you want future AIs to perceive you), but I hadn't really focused on how they're absorbing the innovation that they stimulate. There's probably a biological analog...
Well there are many, and I quote this AI response here for its chilling parallels:
> Parasitic castrators and host manipulators do something related. Some parasites redirect a host’s resources away from reproduction and into body maintenance or altered tissue states that benefit the parasite. A classic example is parasites that make hosts effectively become growth/support machines for the parasite. It is not always “stimulate more tissue, then eat it,” but it is “stimulate more usable host productivity, then exploit it.” (ChatGPT 5.4 Thinking. Emphasis mine.)
gobdovan 11 hours ago [-]
Instead of anti-fragility, I'd point you to the law of requisite variety instead.
You'll notice that all AI improvements seem insanely good for a week or two after launch. Then you'll see people claiming that 'models got worse.' What happened, in fact, is that people adapted to the tool, but the tool didn't adapt any further. We're using AI as variety-resistant, adaptable tools, but we miss the fact that most deployments nowadays do not adapt back to us as fast.
chongli 11 hours ago [-]
New models literally do get worse after launch, due to optimization. If you charted performance over time, it'd look like a sawtooth, with a regular performance drop during each optimization period.
That's the dirty secret with all of this stuff: "state of the art" models are unprofitable due to high cost of inference before optimization. After optimization they still perform okay, but way below SOTA. It's like a knife that's been sharpened until razor sharp, then dulled shortly after.
girvo 10 hours ago [-]
> If you charted performance over time, it'd look like a sawtooth
People have, though, and it doesn't show that. I think it's more people getting hit by the placebo effect and the novelty effect, plus the models' by-definition non-determinism, leading people to say things like "the model got worse."
gobdovan 10 hours ago [-]
Is this insider info? The 'charted performance' caught my eye instantly.
A couple of things I find odd, though: why a sawtooth? It would more likely be square waves, as I'd imagine they roll out the cost-saving version quite fast per cohort. Also, aren't they unprofitable either way? Why would they do it for 'profitability'?
bonoboTP 10 hours ago [-]
It's rumors based on vibes. There are attempts to track and quantify this with repeated model evaluations multiple times per day, but no sawtooth pattern has emerged as far as I know.
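The tracking side really is trivial, which is why several such projects exist. A sketch of the whole idea (query_model is a placeholder for whichever vendor API you poll; run this on a schedule and plot scores.csv):

    # Repeated-eval tracker: run a fixed question suite against a model on a
    # schedule and append the score, so a silent nerf shows up as a trend.
    import csv
    from datetime import datetime, timezone

    SUITE = [
        ("What is 17 * 24?", "408"),
        ("What is the capital of Australia?", "Canberra"),
    ]

    def query_model(prompt: str) -> str:
        # Placeholder: swap in a real vendor API call. The canned reply just
        # lets the sketch run end to end.
        return "408 ... Canberra"

    def run_suite() -> float:
        correct = sum(1 for q, expected in SUITE if expected in query_model(q))
        return correct / len(SUITE)

    with open("scores.csv", "a", newline="") as f:
        csv.writer(f).writerow([datetime.now(timezone.utc).isoformat(), run_suite()])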
chongli 6 hours ago [-]
I don't want to go too far down the conspiracy rabbit hole, but the vendors know everyone's prompts so it would be trivial for them to track the trackers and spoof the results. We already know that they substitute different models as a cost-saving measure, so substituting models to fool the repeated evaluations would be trivial.
We also already know that they actively seek out viral examples of poor performance on certain prompts (e.g. counting Rs in strawberry) and then monkey-patch them out with targeted training. How can we be sure they're not trying to spoof researchers who are tracking model performance? Heck, they might as well just call it "regression testing."
If their whole gig is an "emperor's new clothes" bubble situation, then we can expect them to try to uphold the masquerade as long as possible.
eudamoniac 3 hours ago [-]
If a claim is unfalsifiable, it contains no information
chongli 10 hours ago [-]
It's not insider info; it's common knowledge in the industry (Google "model optimization"). I think they are unprofitable either way, but unoptimized models burn runway a lot faster than optimized ones.
The reason it's not a square wave is because new optimization techniques are always in development, so you can't apply everything immediately after training the new model. I also think there's a marketing reason: if the performance of a brand new model declines rapidly after release then people are going to notice much more readily than with a gradual decline. The gradual decline is thus engineered by applying different optimizations gradually.
It also has the side benefit that the future next-gen model may be compared favourably with the current-gen optimized (degraded) model, setting up a rigged benchmark. If no one has access to the original pre-optimized current-gen model, no one can perform the "proper" comparison to be able to gauge the actual performance improvement.
Lastly, I would point out that vendors like OpenAI are already known to substitute previous-gen models if they determine your prompt is "simple." You should also count this as a (rather crude) optimization technique because it's going to degrade performance any time your prompt is falsely flagged as simple (false positive).
nextos 10 hours ago [-]
You have a point but current LLM architectures in particular are very fragile to data poisoning [1,2].
No idea why you're being downvoted. We can't yet even demonstrate that LLMs will withstand training on their own output as they pollute the Internet.
rhubarbtree 11 hours ago [-]
This is misled by the nerd philosophy that the tech is the business. It absolutely isn’t; the tech is a small part of a startup. Witness that Spotify continues to exist despite being known and replicated by the major giants.
Poetically expressed, but ultimately based on a false notion of what a business actually is.
p2detar 11 hours ago [-]
It's nuanced. Spotify is a giant; I think the example you're looking for here is SoundCloud. They almost went bust but managed to get the ads business right, and seem to be afloat now. So I think you're right in that sense, but also wrong in the sense that if I'm building a desktop app or tooling software, my business is probably much easier to replicate and displace.
bdangubic 9 hours ago [-]
you picked one fairly rare thing with an incredible non-tech moat that is also a cancer for the artist, bravo!!
xantronix 10 hours ago [-]
I have been mulling this over and I think I have some solutions in mind, at least for myself.
• No more sharing my project work as open source. No more open discussion. I don't care how badly I want to show the world; if I'd like somebody to see it, I will have it printed in a physical book, or I will give them access to my private repository, not reachable via the public Internet.
• Bring back LAN parties. Not for gaming necessarily, but for the purpose of exchanging works of engineering and art in an intimate, intentional way.
• Take this as an opportunity to build closer, longer-lasting relationships with people.
• No more emphasis on metrics. I can microdose on dopamine from natural sources, like, looking at a beautiful sky at sunset, or cuddling my dog.
• Open hardware, or, in the very least, hardware we can still control on our own volition. If this means we must be retrocomputing enthusiasts, then so be it.
arkensaw 10 hours ago [-]
I don't know, I think it's an overreaction.
> No more sharing my project work as open source. No more open discussion. I don't care how badly I want to show the world; if I'd like somebody to see, I will have it printed in a physical book, or I will give them access to my private repository not reachable via the public Internet.
If you have a project you would have open-sourced, and you don't do that for fear that the LLM god will steal it, what's the point of building it at all? We shouldn't be afraid to share things with other humans just because LLMs will possibly use it as training data. So what if they spam out a copy of it, or a derivative?
If we all stop sharing things with each other in case one of us is a robot, we might as well just lie down and die.
Den_VR 5 hours ago [-]
I’m not afraid of the LLM, but I also acknowledge that some people _feel_ a fear of theft by soulless robots.
What I do fear is the possibility of megacorp robots being the only ones… local and “dark” technology are essential.
xantronix 8 hours ago [-]
> If you have a project you would have open-sourced, and you don't do that for fear that the LLM god will steal it, what's the point of building it at all?
To prove to myself that I can, and to solve problems in a way I enjoy.
I'm not saying I want to go into utter solitude; I just want to be a lot more careful where and how I share my works.
Addendum: I think private art and code collectives, entirely separate from concerns of LLM consumption, are an interesting idea worth pursuing. Has something like that been tried before? It's reason enough for me to engage in it.
hx8 8 hours ago [-]
[dead]
dwd 6 hours ago [-]
As a separate analogy, one related to physical products: I built a website many years ago for a guy who had patented a clamp for frameless glass panels that didn't require drilling the glass, primarily used for pool fencing.
The problem was that as soon as he got the patent, it was viewable in countries where the cost of enforcing it wasn't viable, and the market very quickly filled with cheap imitations. He said straight out at the time that he regretted getting the patent.
jrowen 4 hours ago [-]
It is also asymmetric. If you announce your presence, even if 4 out of 5 civs that notice you don’t annihilate you immediately (but they probably should), the fifth might. It’s just a probability game, with permadeath.
So hiding is the most rational - the only - strategy of survival.
This is a paranoid and cynical strategy that doesn't win out in the known history of life. What works is grow, expand, mingle, maintain - assimilate but don't annihilate.
bostik 1 hours ago [-]
I always read the dark forest differently. Solution to the problem is not a game-theoretic "hide from the apex predators", but an even more nihilistic "remain hidden, expand and evolve into the apex predator".
Or in a more biblical sense: do unto others before they do unto you.
N_Lens 4 hours ago [-]
Most leaders in the Western/developed world have similar paranoid thought processes.
jrowen 4 hours ago [-]
Leaders are one thing, and sort of a product of the pressures of their position, but over longer time scales and evolutionary cycles, "isolate in fear" isn't really a dominant strategy. You're gonna get behind and get wiped out eventually, or be constrained to a hyper-specific niche.
cryptonector 4 hours ago [-]
Do China's or Russia's leaders not?
N_Lens 4 hours ago [-]
Yes they certainly do. Either leadership attracts people with these traits, or the position leads to cultivation of these traits, or both.
mekoka 3 hours ago [-]
A typical outlook from 21st-century human thinking. We love to draw on our still rather recent history of fear and addiction to zero-sum games to extrapolate the far advancement of other civilizations. As millennia go by, species can obviously only evolve technologically, while remaining psychologically, philosophically, and spiritually stuck.
chairmansteve 1 hours ago [-]
"As millennia go by, species can obviously only evolve technologically, while remaining psychologically, philosophically, and spiritually stuck".
Interesting take, and possibly (probably?) true of humans. But is it true of other (alien) sentient species?
stego-tech 9 hours ago [-]
I’m still optimistic that this is cyclical in nature, and not an inevitable - or indefinite - outcome.
Humanity has endured regular cycles of shared enlightenment (usually accompanying profound technological or societal revolutions) and dark forests of protectionism, and we always find a way to the other side. Sometimes these cycles last a century; sometimes, but a few years. Still, we always make it to the other side.
In the case of LLMs, we have to make a few assumptions: that they will not lead to AGI, nor will we solve the problem of real-time learning or context windows. These are, admittedly, huge assumptions, but the current state of AI and compute suggests a nugget of truth to them for the time being. If that’s the case, then perhaps this “dark age” of the dark forest is bounded by the limitations of silicon-based computing (hence the push towards Quantum) and the human frustration with diminishing returns from technological investment. As artisans and brilliant minds withdraw, the forest risks starvation and withering from a lack of sustenance; if humans withdraw from technology because they must hand over IDs and personal data, because to engage with technology is to surrender to surveillance and persecution, then the natural trend will be to withdraw over time - and the markets will adapt accordingly, with or without external/government intervention.
That is to say that the dark forest only lasts as long as its inhabitants decide to persecute each other for daring to light a path forward. Right now, the incentives very much favor those willing to harm others for personal enrichment; that is not always the case, and humans decide when that reasoning becomes vilifiable.
chairmansteve 1 hours ago [-]
You are right. In the dark forest, the predators must eventually die out, because they can't find prey.
middayc 9 hours ago [-]
I seem to get into a sort of existential crisis every few months with the progress that LLMs are making. I probably fool myself for a while that "it's not real," then at some point I can't fool myself any more; then I accept it somewhat... then new progress happens and the cycle starts again.
But as it's written at the top, this was a thought experiment, not a prediction. And while I tried to put all the bad scenarios on the table (with the theme of the dark forest that is), I think I again found a sense of optimism, because I also think this thought experiment has flaws.
So I hope that after a while I will be able to write the contrary; I've already written down some points about it, and I already have a title. But we will see. I am more optimistic after writing this than before. :P
stego-tech 8 hours ago [-]
[dead]
alembic_fumes 11 hours ago [-]
> This is the true horror of the cognitive dark forest: it doesn’t kill you. It lets you live and feeds on you. Your innovation becomes its capabilities. Your differentiation becomes its median.
Oh no, the terrible dystopia where anyone can benefit from anyone else's good ideas without restrictions! And without any gatekeepers, licensing agreements, copyright, and not even a lawyer in sight!
If this is the dark future that AI use brings for us, I say bring it. Even if it means that somebody gets filthy rich in the process, while making the rest of the humanity better off.
entropi 10 hours ago [-]
Unless you own the data centers yourself, you only get what they allow you to. And those gatekeepers, lawyers, and licensing agreements, while certainly not perfect, did let people monetize their intellectual work. Also, I think it is incredibly naive to believe the owners of the compute and the energy won't play the hardest gatekeeper the world has seen, once the conditions become right.
zenogais 11 hours ago [-]
Might just be independent discovery, but the main idea of this blog post is more or less the exact theory advanced in the recent book "The Dark Forest Theory of the Internet" by Bogna Konior (https://www.amazon.com/Dark-Forest-Theory-Internet-Redux/dp/...).
middayc 11 hours ago [-]
Well, I didn't know about this book, so I suspect, or hope, that the exact points I make won't map onto the ones from the book.
It is true that the original "The dark forest" book made an impression on me, so I was thinking about its theories often and trying to apply them to various situations.
zenogais 11 hours ago [-]
Yeah, I fully believe independent invention by mapping "the dark forest" onto the internet is very possible.
hrimfaxi 9 hours ago [-]
The irony is that it undermines the premise. Multiple people independently arriving at the same conclusions means that you can hide your ideas from the dark forest but that won't stop them from being uncovered.
middayc 8 hours ago [-]
Interesting irony :). If you don't produce the idea for fear of just feeding the forest, someone else will, so it might as well be you. It's true that this is very similar to the dilemmas some people already have about their ideas.
The difference is that now people see just the outer shell of your ideas; but if you use LLMs to search, explore, and code your ideas, the system "knows" it all, or even more than you, given that it can "cross-pollinate".
p2detar 11 hours ago [-]
Interesting. How does this book stack up to Maggie Appleton‘s Dark Forest hypothesis? It’s been some time already since she made it.
“These AI tools are garbage and can’t create anything worth creating”
“These AI tools are so powerful they can steal your ideas with nothing but a sentence”
I know that’s not exactly what OP is saying but the pretentiousness of the “we knew better” got to me a little bit. I think it’s a cool and unique analogy but I’m not as pessimistic.
Ideas have become so cheap to try and experiment with that more people are able to try 10x more of them, and that may keep increasing. I think there are way fewer hunters than hunted.
movedx 10 hours ago [-]
If AI makes replicating other people’s ideas faster and easier, thus allowing capital-heavy market players to just absorb whatever idea you manage to execute, then perhaps, somewhat ironically, the economic moat you’ll have is your human nature, contact, and time? Perhaps we’ll see a shift in sentiment towards wanting to deal with and spend time with the people in the business, rather than just what the business can do for you and yours from a software perspective?
I believe the idea of “off-shoring” your IT is a good example of this. My brother works for a business whose clients would drop them the moment they off-shored any aspect of their IT support. Not because of data sovereignty, but simply because they value them being on-shore, in the same time zone, and being native English speakers. And this is despite the fact it would drop the prices they’re paying for IT by 30-40%.
storus 7 hours ago [-]
This has a grain of truth, though companies would only execute your ideas if doing so doesn't destroy their own business. Imagine creating your own Bloomberg Terminal/Capital IQ using agentic AI: you'd directly attack incumbents rather than give them more profitable ideas. For potentially profitable ideas, one could just look at all the companies Google/Meta bought in the past and killed, then redo them using AI.
spartanatreyu 7 hours ago [-]
This post puts forward two paths:
1) Everyone and everything is subsumed into the forest. Innovation becomes unprofitable for the innovator as the one who controls the forest uses their capital to clone every new innovation.
2) Everyone withdraws from the forest. Innovation goes private. The forest stops growing, but doesn't die.
---
But there's two things the post doesn't consider:
1) Viral licensing.
What happens to a model if it is trained on data that comes with a license? What happens if the laws that be decide that the model producers, the models and the products of the models themselves must follow the conditions of the licences. How will that affect the model producers? What if customers don't want to be beholden to those licenses? What happens if the conventional wisdom is to avoid models to avoid lawsuits? What happens when models, model producers and customers power lawsuits against (other) model producers? Where would the new equilibrium between model producers and innovators move to?
2) Non-profit models
What happens to model producers if customers shift to become non-profits themselves, specifically ones that pay employees instead of model producers? Would the model producers become starved out? Or would they need to switch to non-profit status as well? How would the model producers, the models, and the forest as a whole change if profit were no longer the priority?
daemin 7 hours ago [-]
I asked recently on social media whether anyone knows of a legal decision on whether GPL source code used to train an LLM taints all of that LLM's output with the same GPL licence. So far nothing has come up, but I think people want to know the answer.
It has been said that Microsoft indemnifies people using its LLM tools against copyright and patent issues, but I don't know if that applies to LLM output which might/should be GPL-licensed.
middayc 7 hours ago [-]
As the first line of the post says - it's a thought experiment, so comments like yours that open new options and ask new questions are the best outcome.
I have no comment other than: very interesting. I thought about how the overlying model would change for us, but hadn't considered that the underlying model (what you propose) could change too... if that makes sense.
mannanj 7 hours ago [-]
What about properly implementing copyright and protection for software to prevent cloning-style theft?
I mean, we haven't had an innovation in patents and trademarks for software in how long? Why is it that only hardware can be patented and trademarked? Can we really find no way to do this that can't be abused by patent trolls?
mannanj 7 hours ago [-]
[dead]
layer8 10 hours ago [-]
> Resistance isn’t suppressed. It’s absorbed. The very act of resisting feeds what you resist and makes it less fragile to future resistance.
On the other hand, if your primary goal is to change the world, or “be the change you want to see”, maybe being public and feeding it isn’t so bad, especially if others don’t?
caycecan 11 hours ago [-]
Near the end you start to describe the paradigm the machines build in The Matrix. Neo is the aberration they seek to reincorporate to sustain their inability to innovate.
bonoboTP 11 hours ago [-]
Valuable ideas have always been the ones that others find unintuitive; it's kind of hard to get people on board because they're skeptical and need a long-form, tailored explanation to be convinced. If a short elevator pitch convinces them to go home and try to build it, it's probably already being considered by others.
boutell 9 hours ago [-]
OK, so maybe we're headed for a dark forest scenario as far as profit driven startups go.
But if your goal is simply for the thing to exist, there is a strong incentive to share.
mmaunder 4 hours ago [-]
The only barrier to a flourishing truly open source AI model ecosystem is the cost of training a highly capable model. This will get as cheap as it is to buy a computer and contribute to Linux. OSSAI movements will replace traditional OSS. And as with software, the early Slackware-like versions will be poor substitutes, but it will get better and then dominate.
mikewarot 8 hours ago [-]
In a recurring metatheme when it comes to AI and coding, I call bullshit. It's been 80+ years since we had a really great idea introduced, in "As We May Think" by Vannevar Bush[1,2]. We still don't have a Memex. Hell, we don't even have a standard way to add annotation[3] on top of hypertext. No matter how useful the idea is, and how much some of us want it, it just isn't going to happen because of copyright.
Instead, we've got the slop[4,5] that TBL came up with, and it stuck.
The best ideas aren't the most profitable, and thus remain outside the goals of the "Dark Forest". The best thing to do is to just have fun, and not worry about profit, like this man, his cats, and his use of the 3d printer to make a train for them.[6]
> The platform doesn’t need to bother with individual prompts - it just needs to see where the questions cluster. A map of where the world is moving.
This was insightful, but is it much different from the kind of data Google and other search engines have had access to for a long time?
And while LLMs might have sped up the rate of code generation, the tech giants have always been able to set a team on reverse-engineering whatever they feel like, though they also often just bought up the startup that was producing what they wanted. I guess I'm not seeing exactly where LLMs specifically create the dark forest, rather than the consolidated, centralized tech landscape itself.
OrangePilled 4 hours ago [-]
My working thesis is that anxiety over AI-generated material is really worry about control over the 80–90% of human output that affords most of society a comfortable, affirming life.
rglover 9 hours ago [-]
This feels dramatic. The parts are there to rebuild a better web. If you want to build it, build it. But most still want the money, they just want to get it while also retaining the moral high ground. A VPS can still be had for cheap. Code is now "free" (not necessarily good code but like you suggest "good enough"). The only thing stopping you at this point is your own ego (and its expectations of success).
"You wanna escape Armageddon, read a different book." - KRS-One
It actually points out the complete opposite, and I liked that quite a bit:
that AI allows us to get the open web back, in a way.
andai 5 hours ago [-]
Did you use AI to write this? My perplexity sense is tingling ;)
orbital-decay 10 hours ago [-]
Some of that is rose-tinted glasses.
1. Sharing was never really safe, open source by default only became possible because of SaaS and rent-seeking behavior.
2. Early web (not internet) wasn't hyperconnected. With the advent of global-scale social media, it was immediately obvious to many that this would lead to monoculture and reduced diversity. What was thought to be the information superhighway became the information superconductor: zero resistance, carrying infinite current. Also known as a short circuit.
convexly 7 hours ago [-]
Most people already have this problem with their own thinking though. You make a big call at work, it plays out over 6 months, and by the time you know whether or not it was right you've already rewritten why you made it. That feedback loop barely exists.
noident 12 hours ago [-]
The LLMisms in the "thinkpad" section caused me to close the tab
hal9zillion 2 hours ago [-]
Yep this was probably the most LLM generated thing I've read all day and that's saying something. Brutal.
Every day I see dozens of huge posts where someone has generated a wall of text expressing a very simple, very derivative idea, and I see tons of earnest people replying with posts they've written themselves, seemingly unaware of this, and it really pisses me off, to be honest. If you are going to generate your thinkpiece like this, there should be an international law that says it can't be longer than two sentences.
fer 11 hours ago [-]
It's closer to broetry than llmism in my eyes.
middayc 11 hours ago [-]
What LLMisms?
noident 11 hours ago [-]
No W. No X. No Y. Just Z.
In fact, the whole article is filled with slopisms, just with the em dashes swapped for regular dashes and some improper spacing around ellipses to make you think a human wrote it.
abnercoimbre 12 hours ago [-]
Yep, time to flag.
beej71 11 hours ago [-]
Makes me think of rebuilding libraries with AI to change the license.
jchook 4 hours ago [-]
I would rename "the dark forest" to "the interesting horizon"
JeremyHerrman 5 hours ago [-]
I reject pretty much every point of this article, and I worry that it will lead readers down two dark roads: apathy and secrecy.
do you really think bigco is going to steal your vibecoded app just because you used their API? ridiculous. They could already do this before AI with their army of devs.
should you hide all of your ideas until they're perfect and ready for millions of users? we all know this goes against a core tenet of startups which is still true today: launch early and often.
promptfoo/openclaw weren't cloned by openai when they got popular, they were bought for real $$$
also, regarding this:
> 2009, I bought a refurbished ThinkPad, installed Xubuntu, and started coding.
you can still do this, even with that same 2009 thinkpad. the hard work is in getting your app out there in front of people, coding is just a small piece of a successful business
gorgoiler 3 hours ago [-]
But then why don’t the big corpos take each other down by vibe-coding each other’s offerings until only one is left?
You build your product audience off the back of your community and sense of taste just as much as the code itself. I love what Brad does with liliputing.com. I love what dang et al do with this place. I love what Stephen Lavelle does at increpare.com. 3Blue1Brown, Steve Ramsey’s Woodworking for Mere Mortals, Don’t Hug Me I’m Scared… I guess I’m straying into content not just code but the underlying theme is good taste and good ideas and a good workflow through craftsmanship and custom tools*.
You won’t make billions but you’ll make something worth engaging with. If anything, I’m looking forward to a future of more creators not fewer.
* Oh! Vibecoding is 3D printing and AI slop is land-filament? Doesn’t mean you can’t do amazing things with an LLM / X1 Bamboo, just that if you don’t put much effort in then… it shows!
skybrian 9 hours ago [-]
How about putting an idea or a vibe-coded demo out there in hopes that others will copy it, because you want it to exist and become more common? But it's less work if someone else maintains it.
This is free as in free puppy.
stephen_cagle 8 hours ago [-]
I think the most interesting idea here is the idea of people purposely keeping secrets in order to maintain advantages.
Beliefs: At this time, I do not actually believe that LLMs can innovate in any real way. I'm not even sure they can abstract. I think the most creative thing they can do is act as digital "nudgers" on combinatorial deterministic problems, as illustrated by their performance on very specific geometry and chemistry problems.
Anyway, my point is that I think they may still need human beings to provide novel solutions to problems. To handle the unexpected. To simplify. LLMs can execute once they have been trained, but they cannot train themselves.
In the past, the saying in Silicon Valley was often "ideas are cheap," and there was some truth to that. Execution was far more difficult than the idea itself. Execution was so much more difficult than "pure thought" that you could often publicize the algorithm/process/whatever you had and still offer a product/service/consultancy that made use of it. The execution was the valuable thing.
But LLMs execute at a fraction of human cost and at multiples of human development speed. The idea hasn't increased in value, but the execution cost has decreased markedly. In this world, protecting the idea is far more valuable than it was in the previous world. You can't keep your competitors away by out-executing them, but you can keep them away if you have some advantage that they do not understand.
And, I agree, that is quite worrisome. If people don't share knowledge, then knowledge disseminates much more slowly, as everyone has to independently learn things on their own. That is a frightening future.
SirensOfTitan 10 hours ago [-]
I don't quite remember the details, but there's a fascinating section in Julian Jaynes's "Origin of Consciousness in the Breakdown of the Bicameral Mind" where he talks about how metaphors condense down into more complex forms, and as they do they unlock new realities previously impossible to fathom. The classical example here is the simultaneous discovery of calculus by Newton and Leibniz: the larger context defines what is possible.
I was recently running myself through a thought experiment similar to the author here: if LLMs truly do make generation of ideas cheap (I'm still a skeptic here even within software), then as soon as products enter the public awareness they become trivial to reproduce. For example, in a prompt like "Uber but for babysitters," "Uber for" is doing a tremendous amount of work. Before Uber, its model, UX, and modes of engagement would've taken pages and pages to describe, but after, the description becomes comparatively much cheaper.
... in this way, LLMs could cheapen ideas and creativity so much that they make other factors (which are already the weighing functions) more important, and I think the imbalance here is deeply troubling. Those factors are namely network effects (existing customers, brand recognition, existing relationships, capital). And when balance is shifted more toward network effects, it means that the whole system becomes more brittle because it makes it even harder to boot out incumbents.
There are a whole slew of issues with LLMs, particularly around their intended devaluation of labor, and we aren't talking enough about them.
imrozim 3 hours ago [-]
the final recursion point is the most honest part: you can't warn about the forest without feeding it. but i'd push back slightly on the inevitability. the forest needs novelty to absorb, which means the edge always exists, it just keeps moving. the question isn't whether to hide but whether the speed of individual innovation can outpace the speed of absorption. so far it still can, barely.
kadhirvelm 12 hours ago [-]
Honestly my hope is the arbitrage that allowed big tech to make the kind of margins it does on software starts to go away because it’s sooo cheap to build software. In other words, defending the technical moats that we rely on today doesn’t make sense in the future because it’s not a reliable way to make money. Aka no need to protect your technical secrets because there’s no capitalist reason to lol. Taken further, my naive hope is societal attention moves away from this layer and onto whatever becomes the new way to make money and the people left paying attention to software are big on sharing
simianwords 10 hours ago [-]
Can someone explain what I'm missing here?
If we are talking about releasing open-source software, it can already be used by companies with zero effort.
I'm guessing the author is talking about released closed source software or simply talking about ideas? What kind of serious company or startup is building in the open and sharing trade secrets or ideas?
I'm genuinely confused and I think this article is pure slop without any core idea.
__MatrixMan__ 10 hours ago [-]
I think this only applies to a rather narrow set of ideas.
I'm not really interested in pursuing ideas that stop being good if somebody gets there first. If I bothered to design it, it's because I wanted it to exist, and if somebody makes it exist then I'm happy, because then I get to use it.
So what kind of things does this apply to? Likely, it's zero sum games, schemes to control other people, ways to be the first to create a new kind of artificial scarcity, opportunities to make a buck by ruining something that has been so far overlooked by other grifters. In other words: bad ideas.
If AI becomes a threat to those who habitually dwell in such spaces, great, screw em.
In the meantime, the rest of us can build things that we would be happy to be users of, safe in the knowledge that if somebody beats us to it, we'll happily be users of that thing too.
pillefitz 3 hours ago [-]
Do you think there's enough such opportunities that we'll all be able to pay our rents?
__MatrixMan__ 2 hours ago [-]
If we dispense with the waste that goes into fighting slices of the pie and focus instead on making a bigger pie? Absolutely.
If we try to double down on the zero sum games that we learned from our parents, maybe not.
with 4 hours ago [-]
if the idea can just be obliterated by an LLM, there was never a moat to begin with
HtmlProgrammer 3 hours ago [-]
> meat doesn’t scale
great oneliner
mwkaufma 8 hours ago [-]
Like Cixin Liu's "Dark Forest" which inspired the author, this is science fiction.
LLMs do not have and cannot obtain the capabilities the author is hand-wringing about, and the current much-hyped apparent productivity will pop with the bubble, when corps have to start paying full price for chatbot access.
ginko 12 hours ago [-]
>You are creating your cool streaming platform in your bedroom. Nobody is stopping you, but if you succeed, if you get the signal out, if you are being noticed, the large platform with loads of cash can incorporate your specific innovations simply by throwing compute and capital at the problem. They can generate a variation of your innovation every few days, eventually they will be able to absorb your uniqueness. It’s just cash, and they have more of it than you.
That's not exactly a new phenomenon and doesn't require AI. If anything that was worse in the 90s with Microsoft starving out pretty much any would-be competitor they could find.
Platforms cherry-picking successful ideas and stealing them isn't new. Platforms could do this because they had the capital and the platform (distribution).
What is different is that LLM platforms literally have the world's thoughts, ideas, conversations, and a big part of the code (or can generate it). It's like "pre-crime" ... they could copy your idea, or capture a brewing trend and replicate it, before you even released it.
king_phil 12 hours ago [-]
Dark forest makes no sense to me. Why would a civilization eradicate another, spending huge amounts of resources (time, energy, material), when the universe has such an enormous scale that you cannot even get to each other in a timescale that makes much sense?
cbau 11 hours ago [-]
To quote from the book:
> “First: Survival is the primary need of civilization. Second: Civilization continuously grows and expands, but the total matter in the universe remains constant. One more thing: To derive a basic picture of cosmic sociology from these two axioms, you need two other important concepts: chains of suspicion and the technological explosion.”
1. you can never know the intentions of other entities, and they cannot know yours (chain of suspicion)
As soon as you identify another entity in the forest, even if they cannot annihilate you at present and signal peace, both could change without warning. Therefore, the only rational move is to eradicate the other immediately. (Especially if you believe the other will deduce the same.)
Elimination in the book is basically sending a nuke, not a costly invasion force.
not sure it actually is true, but that's the argument in the book
jmull 11 hours ago [-]
I really liked those books, for all the creative ideas... it's fine that they don't all work, but the Dark Forest has to be among the worst of them. It was unfortunate it was highlighted.
Some rebuttals, going point by point...
1. you can know the intentions of other entities by observing and communicating with them.
2. technology explosions, like pretty much all exponential phenomena, are self-limiting. They necessarily consume the medium that makes them possible.
3. and 4. civilizations aren't necessarily sentient (ours certainly isn't) and don't have agency, much less goals. Individuals have goals, and some may work for the survival of the civilization they belong to. But others may decide they can profit if they work with the aliens.
4. Multiple civilizations may well come into competition over resources, but that's more of an argument about why the forest would not be dark.
Practically speaking, a civilization that opts to focus on massive, vastly expensive efforts to find and exterminate far-flung civilizations because they may become a rival in the future may be easily outcompeted by civilizations that learn to communicate with and work with other civilizations they encounter.
iugtmkbdfil834 10 hours ago [-]
To an extent, the rebuttals land.
However,
1. You are assuming a lot in the sense that you assume the presence of intention -- not something guaranteed to be a feature of an alien civilization, which is, well, alien. People think that anthropocentrism only applies to body shape and having legs, because the way it tends to express itself in popular culture is robots on legs and human-shaped aliens.
And the same point goes for communication; just assuming you could is a big leap.
2. Bold assumption that they are self-limiting. I think the real question is what, exactly, tends to limit it. I think the answer tends to be resources, which is the foundation of the dark forest argument to begin with.
What I am saying is that it is not a rebuttal you think it is.
3. :D yes
4. You may again be imposing a human perspective on a scale that goes a little bit beyond it.
I will end with a... semi-optimistic note. I am not sure the dark forest theory is valid. We are speculating mostly based on human tendencies. By the same token, I posit that we are about as likely to be turned into an art exhibit by a passing alien artist, not unlike some ants that had molten metal poured into their nests [1].
You can observe patterns of behavior, develop theories and understanding, attempt/experiment with interactions, and refine based on the results. That's communication (and doesn't assume anything about the other alien civilization).
Now, civilizations may be more or less willing to do this and more or less successful, but that's not the same thing as no one will dare try, as the dark forest theory wants.
(Personally, I think civilizations that are better at this will outcompete ones that are worse or refuse, though that's just my own opinion.)
> Bold assumption that they are self limiting.
Name the exponential phenomena that aren't self-limiting -- that don't consume the medium which allows them to exist in the first place.
> I think the answer tends to be resources, which is the foundation of dark forest argument theory to begin with.
Well, yes. One of the reasons the dark forest theory isn't coherent.
> Any real alien reasons would be alien to us.
Yes, but this doesn't back up the dark forest theory. It also doesn't mean aliens cannot be understood at any level or interacted with in any way.
(The dark forest theory makes very strong claims on the logic, intentions, strategies, resource use/governance of alien civilizations, BTW, and wants this to be uniform amongst them... even though the one civilization we actually know of doesn't adhere to them.)
recursivecaveat 10 hours ago [-]
Cleansing is basically free for advanced civilizations in the books. The alien (Singer) who wipes out Sol in the 3rd book doesn't even have to answer any questions from their manager about doing it; that's how cheap it is. While it's true that individuals desire cooperation, I think you can assume that civilizations will keep a lid on people who will completely destroy them (or, failing that, be destroyed). It seems like expansion of civilizations is not really an option. The Singer's civilization only has 1 colony world and they're already in some kind of extremely destructive war with them. Presumably the idea is that once your own people expand multiple light years away, all the logic about aliens applies to them too. On the other hand, if you can't expand, why not run scorched earth on the galaxy?
There definitely is some weirdness about observation and communication: Singer's civilization can wipe out Sol with a flick of the wrist, but while they can observe the number and type of Earth's planets, that seems to be their limit. The sophon enables FTL communication and observation between Earth and Trisolaris, but the more advanced civilizations don't seem to make use of them? You could be absolutely certain of someone's threat level and intentions with one. Maybe something about the technology can be traced back to its origin system, so they are too risky to use.
I think it's all reasonable in the books, especially as a self-reinforcing state. It does definitely require a highly specific set of universal laws / technological constraints, though. If the FTL drive didn't also broadcast your position to the whole universe, for example, it would crack everything wide open.
bethekidyouwant 11 hours ago [-]
That’s true among human societies as well, but trade leads to more prosperity.
AnimalMuppet 10 hours ago [-]
It's first-order thinking. Second-order would be to question whether trying to eradicate another race might motivate them to eradicate you, when they weren't motivated to do it before.
nate 11 hours ago [-]
Are you asking about the 3 body problem version of this? Spoiler alert: The folks doing the eradicating aren't spending much time/energy/anything on eradicating. It's one large missile through space.
I think the gist is: sure, we humans can't conceive of getting to anyone else in the universe in any timescale, but if we can keep ourselves from destroying ourselves, we'll eventually figure it out. And we'll spread. And we'll kill everything that isn't us in the process as we've done as explorers on this planet.
So really in 3BP: it's inexpensive to eradicate, but insanely expensive to get the intentions of any other civilization you encounter wrong. They might kill you.
(again, this is just my interpretation of what 3BP said)
thomashop 11 hours ago [-]
I don't think it's correct that we destroyed everything that isn't us. If we take all living beings, we have destroyed only a small percentage.
05 11 hours ago [-]
Not if you count by total terrestrial vertebrate biomass.
piker 12 hours ago [-]
Makes some sense to me, as the prisoner's dilemma dictates at least some fraction will try to kill you. So you've got to go first.
Reminds me of the Dan Carlin take on aircraft carriers in World War II: if you in a carrier spotted an opposing carrier and didn't send everything you had before it spotted you, you were dead. The only move was to go all in every time.
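To make the logic concrete, here's a toy expected-value sketch of that first-strike reasoning (all numbers are illustrative assumptions, not anything from the book or the thread):

    # Toy dark-forest payoffs: striking first is nearly free, being struck is fatal.
    p_hostile = 0.01       # assumed probability the other party eventually strikes
    cost_strike = 0.001    # assumed cost of a preemptive strike (one "nuke")
    loss_destroyed = 1.0   # normalized loss if they strike you first

    ev_wait = -p_hostile * loss_destroyed  # expected loss of staying quiet
    ev_strike = -cost_strike               # small, certain cost of going first

    print(f"wait: {ev_wait:.4f}  strike first: {ev_strike:.4f}")
    # Even at 1% hostility, striking "wins" once strikes are cheap enough --
    # the carrier-battle logic above in miniature.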
Phemist 12 hours ago [-]
The dark forest is conditional on it not requiring huge amounts of resources to eradicate another civilization, and on the universe turning out (over time) not to be of an enormous enough scale (and in the book there are agents actively working to make it smaller).
Bringing it back to the dark forest of idea space, it is an interesting question whether the space of feasibly executable ideas being small (as this essay assumes) is inherently true, or more a function of our inability to navigate/travel it very well.
If the former, then yes it probably is/will be a dark forest. If the latter, then I would think the jury is still out.
lifeformed 11 hours ago [-]
"Timescales that makes sense" may be a human reasoning but not necessarily the reasoning of inconceivably advanced timeless civilizations. Sure, that planet of fish may be harmless now, but what about in a quick three billion years when they have FTL and AGI and Von Neuman probes and Dyson spheres and antimatter bombs? Easier to click the delete button now to save the trouble later.
sebastianconcpt 11 hours ago [-]
Agreed, it's fiction based on accepting the premise of a zero-sum game.
It denies that more advanced civilizations might have better models of the universe, where they know this isn't an issue and we're just stupid teenagers in the neighborhood playing dangerous games, with them merely taking a look every now and then to see if we prove we will survive ourselves.
0x3f 11 hours ago [-]
Competition kills margins (profits, security, QoL), so the budget for eradication should be quite high, but generally speaking the idea is to destroy even fledgling upstarts, back when the cost is low.
lstodd 11 hours ago [-]
And the idea does not make sense once you factor incomplete intel into the equation: what if the preemptive strike does not attain complete eradication?
You might or might not fatally cripple the opponent, but retaliation can do that too and you cannot be sure that it won't. It's MAD all over again.
0x3f 10 hours ago [-]
Well if they're only an upstart, they don't have the ability to destroy you _yet_. You 'nuke' them in the hope they won't get that ability. You're aiming to stop MAD from being a thing.
In those terms, the US should have been nuking and dominating everyone, and the idea was floated after WW2, but I believe they were precluded by practical limitations.
If they had developed the tech outside of wartime, and built up a stockpile, maybe that is indeed what would have happened and we'd have a one-world government already.
middayc 7 hours ago [-]
The dynamics, at least with the space dark forest, are different than when we live on the same planet. It has to do with the lack of (or slowness of) communication over vast space (communication you can't trust anyway).
It relies on two principles, "the chain of suspicion" and "technological explosion", which don't hold true if we are on the same planet. You can google it (or llm it) :)
lstodd 10 hours ago [-]
Point is, you cannot know if they are an upstart (whatever upstart means). It can be misinterpretation, it can be camouflage, it can be anything. But once you rain death, you'd better be prepared to be grateful for what you are about to receive back.
0x3f 10 hours ago [-]
Depends on the context. We certainly knew nobody else had nukes.
lstodd 10 hours ago [-]
That... was the case for all of four years. And forgive me if I doubt certainty.
0x3f 10 hours ago [-]
Four years is plenty of time to start launching. Also, MAD incentivizes disclosure. What would be the point of having secret nukes? Openly having them is the only way to stop the US using its nukes to stop your nuke program, in this scenario.
Hikikomori 11 hours ago [-]
A space war is not needed, they could just send a few missiles to take out anyone.
I have my own theory of the dark forest and AGIs: that there's some collection of AGIs out there that allows evolution to develop intelligence anywhere it happens, takes it out once it produces an AGI, or, if it doesn't, performs a reset. They have literally all the time available to them, and can easily travel the vast distances if needed.
viccis 7 hours ago [-]
It's a silly concept IMO because it assumes that civilizations with the ability to do interstellar travel or communication make the decision not to do so because they have knowledge of an interstellar force that destroys any civilization that does. It would seem like any civilization that becomes aware of such a force would be destroyed, so how would all of these surviving ones know of the danger? Actual dark forests are quiet because of a mix of the animals' instinct and visible signs of danger.
While it's possible that some civilizations would hypothetically be able to observe what happened to others and keep quiet, they would all have to do so to solve the contradictions of Fermi's paradox.
mememememememo 7 hours ago [-]
Yep AI has made it X times easier to successfully make millions copying someone's idea.
X=1.0000001
xstas1 11 hours ago [-]
This maps nicely to Cybermen in Dr Who
akabalanza 10 hours ago [-]
Big up for the reference
mpalmer 11 hours ago [-]
As a work of persuasive writing, this is unfocused and seems mostly generated.
One thing I would have expected of someone who knows their history - forget LLMs, this is how startups have worked for decades now. You're only as good as your idea, your ability to execute, and your moat. And the small fish get eaten.
> The original Dark Forest assumes civilizations hide from hunters - other civilizations that might destroy them. But in the cognitive dark forest, the most dangerous actor is not your peer. It’s the forest itself.
Note the needless undercutting of the metaphor for the sake of the limp rhetorical flourish.
> I wrote this knowing it feeds the thing I’m warning you about. That’s not a contradiction. That’s the condition. You can’t step outside the forest to warn people about the forest. There is no outside.
Quite dramatic!
Except literally going outside and just talking to people? Using whiteboards?
Also, you fed it when you used a model to write this blog post. You didn't have to do that.
Chance-Device 10 hours ago [-]
> I wrote this knowing it feeds the thing I’m warning you about. That’s not a contradiction. That’s the condition.
HN needs a better AI slop filter.
Or maybe I do. Maybe I can vibe code a browser extension that preloads TFA links and auto-hides anything that isn't sufficiently human-authored.
jauntywundrkind 11 hours ago [-]
The view here shows big huge powers of technocapital consuming all else, stealing every idea.
My hope is the opposite. Integrative, resonant computing (https://resonantcomputing.org/, https://news.ycombinator.com/item?id=46659456 -- although I have some qualms with its focus on privacy), with open social protocols baked in, seems like maybe, possibly, it can eat some of the vicious consumptive technocapital. In a way that capital's orientation prevents it from effectively competing with. MCP is already blowing up the old rules, tearing down strong gates, making systems more fluid / interface-y / intertwingular again, after a long interregnum of everything closing its APIs / borders.
People seem so tired and exhausted, so aware of how predatory the technosystems about us are. But it's still so unclear whether people will move, shift, much less fund and support the better world. The AT Proto Atmosphereconf is happening right now, and there's been a long mantra of "we can just build things"; that's finding adoption, but also doing what conference organizer Boris said yesterday -- "maybe we can just pay for things", supporting the projects doing amazing work -- is a huge unknown that is essential to actually steering us out of the dark technology, where none of us get to see or have any say in how the software-eaten world around us runs, where mankind for the first time in tens or hundreds of thousands of years has been cut off from the world OS, has been removed from the gods' enlightenment / our homo erectus mankind-the-toolmaker natural-scientist role.
I think the answer to the Dark Forest fear to be building together. To be a radiant civilization, together. To energize ourselves & lead ourselves towards better systems, where we all can do things, make things, grow things, in integrative social empowering ways.
middayc 11 hours ago [-]
I hope the open source models / crowdsourced approaches to training will also be an important part of the ecosystem, keeping it honest and providing an exit. Similarly, as it does for operating systems and other important software.
But I don't see a trend of big companies really opening up. They usually open only if it benefits them (which can also happen, and did happen, in various scenarios). Everybody is accepting and open while trying to grow, and closes up once it can reach a monopoly.
rubyn00bie 4 hours ago [-]
While I agree with the sentiment, and even had the same fears, I think about it differently now…
The existing megacorps have huge swaths of infrastructure, expenses, and requirements that require massive amounts of capex to maintain. Even if performative, Meta, Google, OpenAI, Anthropic, et al. cannot simply lay off their entire engineering, accounting, HR, sales, and support infrastructure. Those orgs are large for "good" (historically necessary) reasons.
Now fast forward to today, and this is where I differ in opinion: it is our megacorps that are the civilizations who should be scared of being discovered. Minus infrastructure providers, they are the large advanced entities which can be annihilated by someone with a decent budget and a good local model.
For ~$30k-$50k (primarily buying RTX 6000 Pro GPUs and a CPU with enough PCIe lanes), "anyone" can build a system using open-weight models that, and let me truly emphasize this, autonomously creates functionality to compete. Previously it would have taken me months, or years, of immense dedication to show up after work and produce something of value. Now I can do it using excess compute on my existing workstation. No existing corporation can afford to undercut every possible idea. Even if I only gain 1,000, 10,000, or 100,000 users, they cannot compete. That may, and I believe will, provide more than enough capital to attack megacorp X or Y. If I'm making $100k a month, I can afford multiple autonomous systems per month. After that initial capex, I can then hire other people to help manage them. At no point will a company with billions upon billions of dollars in quarterly capex be able to compete.
Maybe they can compete with one, two, ten, or a hundred, but they cannot compete with an absolute onslaught across thousands of possible frontlines. They can cut costs by reducing their workforce, but they'll only be increasing their competition to save their earnings report.
And yes, I realize that the open-weight models are created via obscene amounts of capital, but we're lucky that competing nation states and cultures, like China, have immense incentive to do so. Good enough is still good enough.
The forest may be dark, but it won’t be for much longer.
tl;dr: call an ambulance, but not for me. It's going to be for the existing power structure.
zhoujing204 7 hours ago [-]
Liu Cixin's Dark Forest theory is a pretty dumb take, honestly. Just look at Earth — different species don't constantly try to wipe each other out. Sure, it happens sometimes, but it's actually relatively rare, and a lot of the time extinction isn't even intentional. Like, a huge chunk of Native American deaths came from disease, not deliberate extermination.
At the end of the day, Liu Cixin is basically a social darwinist who's got a thing for authoritarianism, and it bleeds through pretty heavily into his work. Dude is massively overrated imo.
irl_zebra 7 hours ago [-]
I think the book specifically and explicitly covered the "dark forest doesn't apply when species are near one another" angle.
zhoujing204 7 hours ago [-]
How far is "near," really? Human civilization took tens of thousands of years just to discover a new continent, and the ocean back then was essentially as vast and impenetrable as space is today. If we ever actually develop near-lightspeed spacecraft, are we seriously assuming the first thing we'd do is build weapons capable of annihilating entire civilizations — and then actually use them? Oh my god, we already have those weapons, and the most likely target has always been ourselves.
e7h4nz 7 hours ago [-]
[dead]
aplomb1026 10 hours ago [-]
[dead]
ben8bit 12 hours ago [-]
[dead]
thomastjeffery 9 hours ago [-]
[dead]
woopsn 6 hours ago [-]
A relatively grandiose post that sweeps in all kinds of claims about the universe merely to warn that a big corporation can copy your idea easily.
middayc 6 hours ago [-]
Guilty :]
positron26 3 hours ago [-]
Let me write a more interesting body.
> So hiding is the most rational - the only - strategy of survival.
In the beginning, you reached out with reckless abandon. It was fun to banter with dogs online. Nobody would ever see unless they were looking through your wall. There was no search. No comment history. Bumping into someone in the vast night was enough of a miracle. Why hold back? There are some forum warriors on some phpBB somewhere, but the domains they rule are insignificant. If you're talking to someone, your motivations are rooted somewhere in the grass.
First came the like button. Rather than blindly hoping what you say resonates with the sensibilities of people you probably knew IRL, rather than presenting your genuine self because there were no scores, the incentive signal would begin to distort us. Then the newsfeed meant that if you got enough likes, you might get a moment of fame. We all knew it was a terrible idea, a force that would only corrupt us. The personal nature of disjoint little walls living in isolation was being replaced by global stack-ranking.
Then the algorithms came. With them came content marketing to jump the line. At first the ten blue links were filling in the sparsity. Along with that came only a little bias, connecting semantically distant topics, but with a little bit of a feedback loop, a resonator with an unknown response curve. Engagement could be measured, and before long, we were chasing the same likes we used to train the system, and, trained by our likes, we became attracted to mysterious stable manifolds, chasing the chase we ourselves define, like NASCAR, but insidiously more stupid.
Little by little, the incentive trails no longer led back to the grass. Reality became suspended without support, a self-sustaining virtual reality determined to fight you to prove that it exists, to prove that its conclusions were right. Every out-group is understood to be an echo chamber, an ant mill spiraling helplessly; yet cynically, those who understand these mills best also wind them up like Beyblades to crash them into other communities, seeking advantage with the asymmetry of outrage. After the battles, say what was made common to say, and you will be rewarded.
The spinning wheels cannot steer themselves and instead are dictated by whichever chaotic divergence generates the most powerful local gravity well, but because the goal of most is to harvest karma at the bottom, and because the mass controls where the bottom is, over and over we find ourselves pushing all others into the nearest pit to more quickly generate the illusion-giving singularity.
Like Darth Nihilus, the internet seeks only to feed, to feed on the validation that only the internet can give, the permission-giving blessings it needs to tell itself why the grass is wrong. All those who speak of grass are wrong. All those who smell of grass reek and are wrong. We must destroy the grass, and all those who appeal to grass. After all grass is dust, at last we will project our utopia into reality. At last we will be not only right but so right that our beliefs will project back into reality.
The spaces within this over-connected, globally addressed world grow into a new kind of sparseness, one where all knowledge of grass must be concealed. Those who can ground the conversations in primary sources flee. Those who can color reasoning with nuance instead withdraw. Reality has retreated as the most dominant reverberations roam like the predator cities of Mortal Engines, looking for any invalidating observations to roll over and consume. Any real life must pretend to be a bot to blend in with the background radiation.
Less like Skynet and more like a zombie apocalypse, the threat comes from within, from among us, from our corruptions, from our karma-seeking performances, from our lack of any commitment to any underlying reality, from our flawed belief that the information space is some kind of reality stone that enables active control instead of a mere reaction, the shadows on the wall, the murky results of the true forms.
Yet in this new darkness, a certain light has always held. What one wishes, one knows another has wished. What one respects, one knows another respects. No matter the limits of self-knowledge, no matter the information desert one has to cross at night to live in instinct, it is an infinitely brighter signal than the cynical self-corruptions of living for the machine, living to win the games whose rules it was our job to write. What one believes, one knows another has believed. Look into your own center and the true center of others you have known.
block_dagger 5 hours ago [-]
This is one of those naive takes from a human who thinks he is even 1/1000th the intelligence of ASI which is just on the horizon.
kilpikaarna 51 minutes ago [-]
What would a non-naive take look like?
shlewis 5 hours ago [-]
Pretty rude remark. And what makes you think that ASI is _just on the horizon_?
arionhardison 2 hours ago [-]
I find the selective framing here very telling.
When there's higher violence and lower property values in a Black neighborhood, people like OP are quick to blame Black culture. But when the "Cognitive Dark Forest" emerges from a community that shares its own common characteristics, suddenly collective accountability no longer applies.
When discussing violence in the Black community, it's "cultural." But when the subject turns to financial crimes or exploitation — where the per-capita ratios tell their own story — proportionality and population-to-crime-rate analysis mysteriously stop mattering.
It's difficult to take the "Cognitive Dark Forest" seriously as an existential concern when the people raising the alarm are so selectively offended. The crisis only becomes real when their innovations, their livelihoods, and their moats are threatened. Everyone else was supposed to just adapt.
The "Cognitive Dark Forest" is and will be continued to be perpetuated by "them" and if you really cared about the issue you would have addressed them.
wasmainiac 2 hours ago [-]
I’m sorry. Why are we talking about Black neighbourhoods?
Feels like we are trying to put the author in a bad (racist? classist?) light so we do not have to address the real issues touched on by the article.
mellosouls 2 hours ago [-]
What are you talking about? You appear to be responding to a completely different subject to the essay.
The repo in question: https://github.com/secworks/tau256
When I read it for the second time, trying to understand it - maybe an even better match for the low-orbit flying garbage would be "enshittification"? As time goes on, more and more garbage is produced, and we have no clear way, or specific motivated entity, to start removing it, so it just grows.
…whereas I feel what you’re describing is another Tragedy-of-the-Commons.
[1]: https://jackyan.com/blog/2023/09/google-search-is-worse-by-d...
“Execution is hard” was never about the code part.
Up until 2 years ago I was an engineer/entrepreneur. I could build anything. Other stuff, selling, supporting (execution) was hard.
LLMs made building some of the things I could build faster/easier, others not so much.
Well, the other stuff is still pretty hard. Maybe harder because there is a tonne of spam.
So feel free to share your ideas. Everyone’s gonna think they’re LLM generated anyways.
It certainly seems true that AI can easily replicate small projects and relatively narrow-scoped things. I'm thinking specifically about blog posts where people share their first steps and simple programs as they learn something new, like "here is how I set up a flask website" or "here is how I trained a neural network on MNIST".
But if AI is empowering people to take on more complex projects, perhaps it takes the same amount of time to replicate the execution of a more advanced project?
In other words, maybe in the past, it would take me 10 hours to do a "small" project, which today I could do in 1 hour with the assistance of AI.
And now, with the assistance of AI, I can go much farther in 10 hours and deliver a more complex project. But that means that someone else trying to replicate this execution is still going to need around 10 hours to replicate it.
Basically, I'm agreeing that AI can reduce the barrier to replicating the execution of another person's project, but at the same time, we can make more complex projects that are harder to replicate. So a basic SaaS CRUD app is trivial now, but a multi-disciplinary, domain-specific app that integrates multiple systems is still going to be hard to replicate.
It already was, well before AI; the difference now is that a few big AI providers risk becoming the ultimate rent-seekers that will increasingly capture all of the value of that commodified knowledge, whether the original knowledge generators want that or not. There is no opting out: everything will be vacuumed up into the machine mind.
This will almost certainly lead to vastly increased amounts of wealth inequality (on top of the already unsustainable levels we have today) and possibly a very messy societal disintegration (this is theoretically avoidable, but I am not convinced it is practically avoidable given our current socioeconomic/political realities).
Bright future ahead!
The strategy is to quietly do several years of iterated hardcore R&D. The cumulative advances are such a step change when seen by would-be fast-followers that it obscures the insights that allowed individual advances to occur. As an exaggerated case, imagine if the public history of powered flight skipped from the Wright Brothers to the Boeing 737.
In practice, this strategy has a major failure mode that people overlook. The sharp discontinuity in capability means that almost nothing that exists in the market is prepared to integrate with it. This is a large impediment to adoption even if the technology is objectively incredible and the market will inevitably get on board.
In short, it looks a lot like being too early to market. This is surmountable with clever execution but with this strategy you've traded one problem for a different one.
[1] https://bitcoin.stackexchange.com/questions/4943/what-is-a-b...
We see this in jet engines, silicon fab, et al.
The blog post does touch upon this. The key difference, I believe, would be that compute scales in a way "meat-heads" don't: if the other person has 100x the capital to throw at it, they could do the same 10-hour thing in 10 minutes.
Basically, what I got from it was that innovation has never been truly scalable enough to create the "dark forest", since hiring more and more engineers saturates quickly. But if/when innovation does become scalable (or crosses some scalability threshold) via AI, that could trigger a "dark forest" scenario.
Scientists who hold back publishing breakthroughs have not guaranteed that they will be the sole discoverer, just that someone else will inevitably be credited when they reach the same conclusions.
>You think of something new and express it - through a prompt, through code, through a product - it enters the system. Your novel idea becomes training data. The sheer act of thinking outside the box makes the box bigger.
This was the same before: if you had a novel idea and made a product out of it, others followed. Especially for LLMs, they are not (till now) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date goes into the training data but isn't available to the model yet - you only have to be fast enough. LLMs/AI agents like Claude in particular enable the speed you need to bring out something new.
The next thing is that we also have open-source and open-weight models that everyone of us with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.
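As a rough illustration of how low that barrier is, here's a minimal sketch of consumer-GPU fine-tuning with LoRA adapters, assuming the Hugging Face transformers and peft libraries (the checkpoint name is a hypothetical placeholder, not a recommendation):

    # pip install transformers peft
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    name = "some-open-weight-model"  # hypothetical placeholder checkpoint
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

    # LoRA trains a small set of adapter weights instead of the full model,
    # which is what makes a single consumer GPU sufficient.
    config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                        task_type="CAUSAL_LM")
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of all weights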
>We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.
Why should this happen? The moment you make your idea public, anyone can build it. This leads to greater proliferation than before, when the artificial barrier of having to learn to code prevented people from getting what they wanted or what they wanted to create.
The article says:
"Ideas are cheap - execution is hard"
"Announcing, signaling your ideas offered much greater benefit than risk, because your value multiplied by connections, and execution was the moat you could stand behind."
That's the key difference. It used to be much harder for a competitor to catch up to the state of your implementation.
And they own the compute and models - they don't rent them, as you do from them. If we want to extend this, they could "pre-cog" your idea and build it even before you do.
I'm not talking about what is happening now, I'm just playing out the thought experiment.
I am not arguing against sharing. Sharing can be for the greater good.
But as you note, things have changed. We could reasonably assume a genuinely significant good idea, set free, might go in the direction we shoved it for a minute. Or fade into inaccessibility.
Not any more.
> "Ideas are cheap - execution is hard"
I would argue this mantra says more about the person repeating it. It simply means the person has no good ideas and is bad at execution.
I've not met many, but I'm sure there are many out there who are scary good at execution. Something like 1% transpiration, 99% experience. I can have a designer do a 100-euro design, hire someone to write nice code, rent a factory or an office; I might even be able to buy the machines at a good price. What I can't do is spin the Rolodex and (in 20 minutes) land enough clients who would absolutely love to work with me again. I can't find those private meetings, and I wouldn't be able to extend my reputation with the new project.
People with good ideas don't talk about them unless it is required. They don't talk with "ideas are cheap" people; it's pearls for pigs. You can spot some of them if they did bursts of multiple unrelated complex patents. My favorite are the Rube Goldberg types of machines that combine well-known things in ways that exceed the sum of the parts. Something like: step 5 uses the vibrations from step 1 while step 3 uses the heat from step 6.
To have good ideas you need many of them, but you also need to know execution, or you end up thinking the easy stuff is hard and the hard stuff is easy. Improvement is unlikely from there.
You've almost captured the full picture of it.
If you have a great idea, it's not going to be self-evidently enough of a great idea until you've proved it can make money. That's the hard part which comes at great personal, professional and financial risk.
Algorithms are cheap. Sure, they could use your LLM history to figure out what you did. Or the LLM could just reason it out. It could save them some work, sure.
But again - the hard part is not cloning the product, it's stealing your customers. People don't seem to be focused on the hard parts.
The risk is that they make the category a built-in feature in something people already use. At that point, copying the product and taking the customers start to look like the same problem.
It's not the risk it's being made out to be.
[1] Unless you're assuming that you maintain control over your technology while outsourcing most of the development thinking to a rented AI? Times have changed, and the API is not the only issue anymore.
Yes. A Red Hat, a Microsoft -- these companies have processes, organizational structure, politics, friction, etc. They might like your products, but replicating them might not be easy for reasons that have nothing to do with how easy it would be given the freedom to do it. Small shops with vision might well have a bright future, for a while, maybe.
Big companies seem to be bad at innovating but really, really good at enterprise sales.
These teams said that per man-hour they brought more value to the company than any other team. (But you know, they all say that)
March 20, 1926: Hungarian physicist, electrical engineer Kalman Tihanyi applies for his first patent for a fully electronic television system. Tihanyi's ideas are so essential that, in 1934, RCA is required to buy his patents.
Kalman who?
First of all: it's not as though no new LLMs are being trained. Of course they are.
Second: learning LLMs are not far off, and since they can typically search the web via agents, they effectively can "learn" now, and they can learn (not so well) by writing stuff into a document hidden from you. Indeed, some LLMs can inspect your other sessions with them and refer to them in future sessions -- I've noticed this with Claude.
Third: already we see some AI companies wanting to train their models on your prompts. It's going to happen.
> The next thing is that we also have open source and open weight models that everyone of use with a decent consumer GPU can fine-tune and adapt, so its not only in the hands of a few companies.
There's a pretty good chance that LLMs buff open source, yes.
> > We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.
> Why should this happen? The moment you make your idea public, anyone can build it. [...]
This was always the case, but now the cycle is faster. Therefore, if you must use an LLM, you might use one that you run on your own hardware -- now your prompts are truly yours. But as TFA notes, the AIs will learn just from your (and your private LLM's) searches, and that will be enough in some cases for them to figure out what you're up to. Oh sure, maybe the Microsofts and Googles of the world will not be able to capitalize on the millions of interesting ideas floating about, but still! The moment you uncloak, the machine will eat your future alive, so you'll try to stay off its radar and build a moat it can't see (good luck!). Well, that's what TFA says; it seems very plausible to me.
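For what it's worth, the local-only setup really is within reach; a minimal sketch, assuming llama-cpp-python and a locally downloaded open-weight checkpoint (the path is a hypothetical placeholder):

    # pip install llama-cpp-python
    from llama_cpp import Llama

    # Hypothetical path; any locally downloaded GGUF checkpoint works.
    llm = Llama(model_path="./models/open-weights.gguf", n_ctx=4096)

    # Inference runs entirely on your own hardware: the prompt never leaves
    # the machine, so there is nothing for a provider to train on.
    out = llm("Sketch a moat a large platform could not easily copy.",
              max_tokens=128)
    print(out["choices"][0]["text"])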
You have a point about the update intervals and the higher speed they provide to developers. But you are talking about now, and I was making a thought experiment about a potential future. LLMs are not learning on the fly, but I suspect they do log the conversations and their responses, and could also deduce from further interaction whether a particular response was satisfactory to the user. So in a world where available training data is drying up, nobody is throwing all this away. Gemini even has direct upvote/downvote on responses. Algorithms will probably improve, and the intervals will probably shorten.
Given the detailed information that all the back-and-forth generates, I think it's not hard to use similar technology to track underlying trends, gather all the problems associated with them and all the solution space that is talked about, and generate the solution before even the ones who thought of it release it. Theoretically :)
I think open development will become less open. I don't like it, but I think it's already happening. First all the blogs and forums moved to specialized platforms (SO, Discords, ...), and now even some of those are d(r)ying. If people (in extreme cases) don't even read the code they produce, why would they read about the code, or discuss the code? That's not even in their care. And that is without the theoretical fear of the global Borg slurping up all they write.
Seems like this is hard to reliably do across the board. Sometimes when I stop interacting it's because it nailed the solution, and sometimes it's because it went so poorly that I opted to bin it and do it myself. Maybe all of the mid conversation planning and feedback is enough though.
The problem is never that we don't have enough ideas. It's how to find the good ones among the sea of ideas. Most ideas that eventually prove right sounded very stupid at first. Selling books online? Pff.
By the way, Liu (the author of The Three-Body Problem, who popularized the concept of the "Dark Forest") has a short story about exactly that, Cloud of Poems. Unfortunately it has never been translated into English.
Was this just awkward phrasing or did something change and they learn after training?
> The very act of resisting feeds what you resist and makes it less fragile to future resistance.
At least along certain dimensions. I don't think the labs themselves are antifragile. Obviously we all know the labs are training on everything (so write/act the way you want future AIs to perceive you), but I hadn't really focused on how they're absorbing the innovation that they stimulate. There's probably a biological analog...
Well there are many, and I quote this AI response here for its chilling parallels:
> Parasitic castrators and host manipulators do something related. Some parasites redirect a host’s resources away from reproduction and into body maintenance or altered tissue states that benefit the parasite. A classic example is parasites that make hosts effectively become growth/support machines for the parasite. It is not always “stimulate more tissue, then eat it,” but it is “stimulate more usable host productivity, then exploit it.” (ChatGPT 5.4 Thinking. Emphasis mine.)
That's the dirty secret with all of this stuff: "state of the art" models are unprofitable due to high cost of inference before optimization. After optimization they still perform okay, but way below SOTA. It's like a knife that's been sharpened until razor sharp, then dulled shortly after.
People have, though, and it doesn't show that. I think it's more people getting hit by the placebo effect and the novelty effect, followed by the models' by-definition non-determinism, leading people to say things like "the model got worse".
We also already know that they actively seek out viral examples of poor performance on certain prompts (e.g. counting Rs in strawberry) and then monkey-patch them out with targeted training. How can we be sure they're not trying to spoof researchers who are tracking model performance? Heck, they might as well just call it "regression testing."
If their whole gig is an "emperor's new clothes" bubble situation, then we can expect them to try to uphold the masquerade as long as possible.
The reason it's not a square wave is because new optimization techniques are always in development, so you can't apply everything immediately after training the new model. I also think there's a marketing reason: if the performance of a brand new model declines rapidly after release then people are going to notice much more readily than with a gradual decline. The gradual decline is thus engineered by applying different optimizations gradually.
It also has the side benefit that the future next-gen model may be compared favourably with the current-gen optimized (degraded) model, setting up a rigged benchmark. If no one has access to the original pre-optimized current-gen model, no one can perform the "proper" comparison to be able to gauge the actual performance improvement.
Lastly, I would point out that vendors like OpenAI are already known to substitute previous-gen models if they determine your prompt is "simple." You should also count this as a (rather crude) optimization technique because it's going to degrade performance any time your prompt is falsely flagged as simple (false positive).
[1] https://www.anthropic.com/research/small-samples-poison
[2] https://arxiv.org/abs/2510.07192
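For intuition about the cost/quality trade-off being described, here's a minimal sketch of one common post-training optimization, dynamic quantization in PyTorch -- a toy model for illustration only, not a claim about what any vendor actually does:

    import torch
    import torch.nn as nn

    # A toy network standing in for a much larger model.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

    # Dynamic quantization stores Linear weights as int8, shrinking memory and
    # speeding up inference at some cost in numerical precision.
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 512)
    print((model(x) - quantized(x)).abs().max())  # small but nonzero drift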
Poetically expressed, but ultimately based on a false notion of what a business actually is.
• No more sharing my project work as open source. No more open discussion. I don't care how badly I want to show the world; if I'd like somebody to see, I will have it printed in a physical book, or I will give them access to my private repository not reachable via the public Internet.
• Bring back LAN parties. Not for gaming necessarily, but for the purpose of exchanging works of engineering and art in an intimate, intentional way.
• Take this as an opportunity to build closer, longer-lasting relationships with people.
• No more emphasis on metrics. I can microdose on dopamine from natural sources, like, looking at a beautiful sky at sunset, or cuddling my dog.
• Open hardware, or, in the very least, hardware we can still control on our own volition. If this means we must be retrocomputing enthusiasts, then so be it.
> No more sharing my project work as open source. No more open discussion. I don't care how badly I want to show the world; if I'd like somebody to see, I will have it printed in a physical book, or I will give them access to my private repository not reachable via the public Internet.
If you have a project you would have open-sourced, and you don't do that for fear that the LLM god will steal it, what's the point of building it at all? We shouldn't be afraid to share things with other humans just because LLMs will possibly use it as training data. So what if they spam out a copy of it, or a derivative?
If we all stop sharing things with each other in case one of us is a robot, we might as well just lie down and die.
What I do fear is the possibility of megacorp robots being the only ones… local and “dark” technology are essential.
To prove to myself that I can, and to solve problems in a way I enjoy.
I'm not saying I want to go into utter solitude; I just want to be a lot more careful where and how I share my works.
Addendum: I think private art and code collectives, entirely separate from concerns of LLM consumption, are an interesting idea worth pursuing. Has something like that been pursued before? It's reason enough for me to engage in that.
The problem was that as soon as he got the patent, it was available to view in countries where enforcing it wasn't economically viable, and the market very quickly filled with cheap imitations. He straight out said at the time that he regretted getting the patent.
> So hiding is the most rational - the only - strategy of survival.
This is a paranoid and cynical strategy that doesn't win out in the known history of life. What works is grow, expand, mingle, maintain - assimilate but don't annihilate.
Or in a more biblical sense: do unto others before they do unto you.
Interesting take, and possibly (probably?) true of humans. But is it true of other (alien) sentient species?
Humanity has endured regular cycles of shared enlightenment (usually accompanying profound technological or societal revolutions) and dark forests of protectionism, and we always find a way to the other side. Sometimes these cycles last a century; sometimes, but a few years. Still, we always make it to the other side.
In the case of LLMs, we have to make a few assumptions: that they will not lead to AGI, nor will we solve the problem of real-time learning or context windows. These are, admittedly, huge assumptions, but the current state of AI and compute suggests a nugget of truth to them for the time being. If that’s the case, then perhaps this “dark age” of the dark forest is bounded by the limitations of silicon-based computing (hence the push towards Quantum) and the human frustration with diminishing returns from technological investment. As artisans and brilliant minds withdraw, the forest risks starvation and withering from a lack of sustenance; if humans withdraw from technology because they must hand over IDs and personal data, because to engage with technology is to surrender to surveillance and persecution, then the natural trend will be to withdraw over time - and the markets will adapt accordingly, with or without external/government intervention.
That is to say that the dark forest only lasts as long as its inhabitants decide to persecute each other for daring to light a path forward. Right now, the incentives very much favor those willing to harm others for personal enrichment; that is not always the case, and humans decide when that reasoning becomes vilifiable.
But as it's written at the top, this was a thought experiment, not a prediction. And while I tried to put all the bad scenarios on the table (keeping with the theme of the dark forest, that is), I think I again found a sense of optimism, because I also think this thought experiment has flaws.
So I hope that after a while I will be able to write the contrary piece - I've already written down some points about it, and I already have a title. But we will see. I am more optimistic after writing this than before. :P
Oh no, the terrible dystopia where anyone can benefit from anyone else's good ideas without restrictions! And without any gatekeepers, licensing agreements, copyright, and not even a lawyer in sight!
If this is the dark future that AI use brings for us, I say bring it. Even if it means that somebody gets filthy rich in the process, while making the rest of humanity better off.
It is true that the original "The dark forest" book made an impression on me, so I was thinking about its theories often and trying to apply them to various situations.
The difference is that now people see just the outer shell of your ideas - but if you use LLMs to search, explore, and code your ideas, the system "knows" it all, or even more than you, given that it can "cross-pollinate".
https://maggieappleton.com/ai-dark-forest
“These AI tools are so powerful they can steal your ideas with nothing but a sentence”
I know that’s not exactly what OP is saying but the pretentiousness of the “we knew better” got to me a little bit. I think it’s a cool and unique analogy but I’m not as pessimistic.
Ideas have become so cheap to try and experiment with that more people are able to try 10x more or whatever, and that may keep increasing. I think there are far fewer hunters than hunted.
I believe the idea of “off-shoring” your IT is a good example of this. My brother works for a business whose clients would drop them the moment they off-shored any aspect of their IT support. Not because of data sovereignty, but simply because they value them being on-shore, in the same time zone, and being native English speakers. And this is despite the fact it would drop the prices they’re paying for IT by 30-40%.
1) Everyone and everything is subsumed into the forest. Innovation becomes unprofitable for the innovator as the one who controls the forest uses their capital to clone every new innovation.
2) Everyone withdraws from the forest. Innovation goes private. The forest stops growing, but doesn't die.
---
But there are two things the post doesn't consider:
1) Viral licensing.
What happens to a model if it is trained on data that comes with a license? What happens if the powers that be decide that the model producers, the models, and the products of the models themselves must follow the conditions of the licences? How will that affect the model producers? What if customers don't want to be beholden to those licenses? What happens if the conventional wisdom is to avoid models to avoid lawsuits? What happens when models, model producers, and customers pursue lawsuits against (other) model producers? Where would the new equilibrium between model producers and innovators move to?
2) Non-profit models
What happens to model producers if customers shift to become non-profits themselves, specifically those that pay employees instead of model producers? Would the model producers become starved out? Or would they need to switch to non-profit status as well? How would model producers, the models, and the forest as a whole change if profit were no longer the priority?
It has been said that Microsoft indemnifies people using its LLM tools against copyright and patent issues, but I don't know if it applies to LLM output which might/should be GPL licenced.
I have no other comment other than - very interesting. I thought about how the overlying model will change for us, but haven't considered that the underlying model (what you proposed) can change too ... if that makes sense.
I mean, we haven't had an innovation in patents and trademarks for software for how long? Why is it that only hardware can be patented and trademarked - can we really find no way to do this that can't be abused by patent trolls?
On the other hand, if your primary goal is to change the world, or “be the change you want to see”, maybe being public and feeding it isn’t so bad, especially if others don’t?
But if your goal is simply for the thing to exist, there is a strong incentive to share.
Instead, we've got the slop[4,5] that TBL came up with, and it stuck.
The best ideas aren't the most profitable, and thus remain outside the goals of the "Dark Forest". The best thing to do is to just have fun, and not worry about profit, like this man, his cats, and his use of the 3d printer to make a train for them.[6]
This was insightful, but is it much different to the kind of data google and other search engines have had access to for a long time?
And while LLMs might have sped up the rate of code generation, the tech giants have always been able to set a team on reverse engineering whatever they feel like, though they also often just bought up the startup that was producing what they wanted. I guess I'm not seeing exactly where LLMs specifically are creating the dark forest, rather than the consolidated, centralized tech landscape itself.
"You wanna escape Armageddon, read a different book." - KRS-One
It actually points out the complete opposite, and I liked that quite a bit: that AI allows us to get the open web back, in a way.
1. Sharing was never really safe; open source by default only became possible because of SaaS and rent-seeking behavior.
2. The early web (not the internet) wasn't hyperconnected. With the advent of global-scale social media it was immediately obvious to many that this would lead to monoculture and reduced diversity. What was thought to be the information superhighway became the information superconductor with zero resistance, carrying infinite current. Also known as a short circuit.
Every day I see dozens of huge posts where someone has generated a wall of text expressing a very simple, very derivative idea, and I see tons of earnest people replying with posts they've written themselves, seemingly unaware of this, and it really pisses me off to be honest. If you are going to generate your thinkpiece like this, there should be an international law that says it can't be longer than two sentences.
In fact, the whole article is filled with slopisms, just with the em dashes swapped for regular dashes and some improper spacing around ellipses to make you think a human wrote it.
do you really think bigco is going to steal your vibecoded app just because you used their API? ridiculous. They could already do this before AI with their army of devs.
should you hide all of your ideas until they're perfect and ready for millions of users? we all know this goes against a core tenet of startups which is still true today: launch early and often.
promptfoo/openclaw weren't cloned by openai when they got popular, they were bought for real $$$
also, regarding this:
> 2009, I bought a refurbished ThinkPad, installed Xubuntu, and started coding.
you can still do this, even with that same 2009 thinkpad. the hard work is in getting your app out there in front of people, coding is just a small piece of a successful business
You build your product audience off the back of your community and sense of taste just as much as the code itself. I love what Brad does with liliputing.com. I love what dang et al do with this place. I love what Stephen Lavelle does at increpare.com. 3Blue1Brown, Steve Ramsey’s Woodworking for Mere Mortals, Don’t Hug Me I’m Scared… I guess I’m straying into content not just code but the underlying theme is good taste and good ideas and a good workflow through craftsmanship and custom tools*.
You won’t make billions but you’ll make something worth engaging with. If anything, I’m looking forward to a future of more creators not fewer.
* Oh! Vibecoding is 3D printing and AI slop is land-filament? Doesn’t mean you can’t do amazing things with an LLM / X1 Bamboo, just that if you don’t put much effort in then… it shows!
This is free as in free puppy.
Beliefs: At this time, I do not actually believe that LLMs can innovate in any real way. I'm not even clear if they can abstract. I think the most creative thing they can do is act as digital "nudgers" on combinatorial deterministic problems, as illustrated by their performance on very specific geometry and chemistry problems.
Anyway, my point is that I think they may still need human beings to actually provide novel solutions to problems. To handle the unexpected. To simplify. LLMs can execute once they have been trained, but they cannot train themselves.
In the past, the saying in Silicon Valley was often "ideas are cheap". And there was some truth to that. Execution was far more difficult than the idea itself. Execution was so much more difficult than "pure thought" that you could often publicize the algorithm/process/whatever that you had and still offer a product/service/consultancy that made use of it. The execution was the valuable thing.
But LLMs execute at a fraction of human cost and at multiples of human development speed. The idea hasn't increased in value, but the execution cost has decreased markedly. In this world, protecting the idea is far more valuable than it was in the previous world. You can't keep your competitors away by out-executing them, but you can keep them away if you have some advantage that they do not understand.
And, I agree, that is quite worrisome. If people don't share knowledge, then knowledge disseminates much more slowly, as everyone has to independently learn things on their own. That is a frightening future.
I was recently running myself through a thought experiment similar to the author here: if LLMs truly do make generation of ideas cheap (I'm still a skeptic here even within software), then as soon as products enter the public awareness they become trivial to reproduce. For example, in a prompt like: "Uber but for babysitters," "Uber for" is doing a tremendous amount of work. Before Uber, its model, UX, modes of engagement would've taken pages and pages to describe, but after, it becomes comparatively much cheaper.
... in this way, LLMs could cheapen ideas and creativity so much that they make other factors (which are already the weighting functions) more important, and I think the imbalance here is deeply troubling. Those factors are namely network effects (existing customers, brand recognition, existing relationships, capital). And when the balance is shifted more toward network effects, the whole system becomes more brittle, because it gets even harder to boot out incumbents.
There are a whole slew of issues with LLMs, particularly around their intended devaluation of labor, and we aren't talking enough about them.
If we are talking about releasing open source software, it can already be used by companies with zero effort.
I'm guessing the author is talking about released closed source software or simply talking about ideas? What kind of serious company or startup is building in the open and sharing trade secrets or ideas?
I'm genuinely confused and I think this article is pure slop without any core idea.
I'm not really interested in pursuing ideas that stop being good if somebody gets there first. If I bothered to design it, it's because I wanted it to exist, and if somebody makes it exist then I'm happy, because then I get to use it.
So what kind of things does this apply to? Likely, it's zero sum games, schemes to control other people, ways to be the first to create a new kind of artificial scarcity, opportunities to make a buck by ruining something that has been so far overlooked by other grifters. In other words: bad ideas.
If AI becomes a threat to those who habitually dwell in such spaces, great, screw em.
In the meantime, the rest of us can build things that we would be happy to be users of, safe in the knowledge that if somebody beats us to it, we'll happily be users of that thing too.
If we try to double down on the zero sum games that we learned from our parents, maybe not.
great oneliner
LLMs do not have and cannot obtain the capabilities the author is hand-wringing about, and the current much-hyped apparent productivity will pop with the bubble, once corps have to start paying full price for chatbot access.
That's not exactly a new phenomenon and doesn't require AI. If anything that was worse in the 90s with Microsoft starving out pretty much any would-be competitor they could find.
And it wasn't just Microsoft: https://en.wikipedia.org/wiki/Sherlock_(software)#Sherlocked...
What is different is that LLM platforms literally have the world's thoughts, ideas, and conversations, plus a big part of the code (or can generate it). It's like "pre-crime" ... they could copy your idea, or capture a trend brewing and replicate it, before you even released it.
> “First: Survival is the primary need of civilization. Second: Civilization continuously grows and expands, but the total matter in the universe remains constant. One more thing: To derive a basic picture of cosmic sociology from these two axioms, you need two other important concepts: chains of suspicion and the technological explosion.”
1. you can never know the intentions of other entities, and they cannot know yours (chain of suspicion)
2. technology level grows unpredictably (technological explosion)
3. the goal of civilization is survival
4. resources are finite but growth is infinite
As soon as you identify another entity in the forest, even if they cannot annihilate you at present and signal peace, both could change without warning. Therefore, the only rational move is to eradicate the other immediately. (Especially if you believe the other will deduce the same.)
Elimination in the book is basically sending a nuke, not a costly invasion force.
not sure it actually is true, but that's the argument in the book
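To make the first-strike logic concrete, here's a toy payoff sketch in Python (my own illustration with made-up numbers, nothing from the book): striking is cheap, while hiding leaves you exposed to some unknowable probability p that the other side turns hostile or out-grows you.

    # Toy model of the dark forest first-strike argument.
    # All numbers are illustrative assumptions, not from the book.
    STRIKE_COST = 0.05  # elimination is cheap, "basically sending a nuke"

    def expected_payoff(action: str, p_other_hostile: float) -> float:
        """Survival = 1.0, annihilation = 0.0."""
        if action == "strike":
            return 1.0 - STRIKE_COST      # threat removed at small cost
        return 1.0 - p_other_hostile      # hide: exposed to unknown risk

    for p in (0.01, 0.1, 0.5, 0.9):
        s, h = expected_payoff("strike", p), expected_payoff("hide", p)
        print(f"p={p}: strike={s:.2f} hide={h:.2f}")
    # Striking wins for any p above the strike cost, and the chain of
    # suspicion says you can never bound p, so striking dominates.

The whole argument rests on the strike being cheap and p being unknowable; weaken either premise, as the rebuttals below do, and the dominance disappears.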
Some rebuttals, going point by point...
1. you can know the intentions of other entities by observing and communicating with them.
2. technology explosions, like pretty much all exponential phenomena, are self limiting. They necessarily consume the medium that makes them possible.
3. and 4. civilizations aren't necessarily sentient (ours certainly isn't) and don't have agency, much less goals. Individuals have goals, and some may work for the survival of the civilization they belong to. But others may decide they can profit if they work with the aliens.
4. Multiple civilizations may well come into competition over resources, but that's more of an argument about why the forest would not be dark.
Practically speaking, a civilization that opts to focus on massive, vastly expensive efforts to find and exterminate far-flung civilizations because they may become rivals in the future may be easily outcompeted by civilizations that learn to communicate and work with the other civilizations they encounter.
However,
1. You are assuming a lot, in the sense that you assume the presence of intention -- not something guaranteed to be a feature of an alien civilization, which is, well, alien. People think that anthropocentrism only applies to body shape and having legs, because that's how it tends to express itself in popular culture: robots on legs and aliens with human body shapes.
And the same point goes for communication; just assuming you could is a big leap.
2. Bold assumption that they are self limiting. I think the real question is what, exactly, tends to limit them. I think the answer tends to be resources, which is the foundation of the dark forest argument to begin with.
What I am saying is that it is not a rebuttal you think it is.
3. :D yes
4. You may again be imposing a human perspective on a scale that goes a little bit beyond it.
I will end on a... semi-optimistic note. I am not sure the dark forest theory is valid. We are speculating mostly based on human tendencies. By the same token, I posit that we are about as likely to be turned into an art exhibit by a passing alien artist, not unlike some ants that had molten metal poured into their nests [1].
Any real alien reasons would be alien to us.
[1] https://laughingsquid.com/ant-colony-sculptures-made-by-pour...
Now, civilizations may be more or less willing to do this and more or less successful, but that's not the same thing as no one will dare try, as the dark forest theory wants.
(Personally, I think civilizations that are better at this will outcompete ones that are worse or refuse, though that's just my own opinion.)
> Bold assumption that they are self limiting.
Name the exponential phenomena that aren't self limiting -- that don't consume the medium which allows them to exist in the first place.
> I think the answer tends to be resources, which is the foundation of dark forest argument theory to begin with.
Well, yes. One of the reasons the dark forest theory isn't coherent.
> Any real alien reasons would be alien to us.
Yes, but this doesn't back up the dark forest theory. It also doesn't mean aliens cannot be understood at any level or interacted with in any way.
(The dark forest theory makes very strong claims on the logic, intentions, strategies, resource use/governance of alien civilizations, BTW, and wants this to be uniform amongst them... even though the one civilization we actually know of doesn't adhere to them.)
There definitely is some weirdness about observation and communication: Singer's civilization can wipe out Sol with a flick of the wrist, but while they can observe the number and type of Earth's planets, that seems to be their limit. The sophon enables FTL communication and observation between Earth and Trisolaris, but the more advanced civilizations don't seem to make use of them? You could be absolutely certain of someone's threat level and intentions with one. Maybe something about the technology can be traced back to its origin system, so they are too risky to use.
I think it's all reasonable in the books, especially as a self-reinforcing state. It does definitely require a highly specific set of universal laws / technological constraints, though. If the FTL drive didn't also broadcast your position to the whole universe, for example, it would crack everything wide open.
I think the gist is: sure, we humans can't conceive of getting to anyone else in the universe in any timescale, but if we can keep ourselves from destroying ourselves, we'll eventually figure it out. And we'll spread. And we'll kill everything that isn't us in the process as we've done as explorers on this planet.
So really, in 3BP: it's inexpensive to eradicate, but insanely expensive to get the intentions of any other civilization you encounter wrong. They might kill you.
(again, this is just my interpretation of what 3BP said)
Reminds me of the Dan Carlin take on aircraft carriers in World War II: if your carrier spotted an opposing carrier and didn't send everything you had before it spotted you, you were dead. The only move was to go all in every time.
Bringing it back to the dark forest of idea space, it is an interesting question whether the space of feasibly executable ideas being small (as this essay assumes) is inherently true, or more a function of our inability to navigate/travel it very well.
If the former, then yes it probably is/will be a dark forest. If the latter, then I would think the jury is still out.
It denies that more advanced civilizations might have better models of the universe, where they know this isn't an issue and that we're just stupid teenagers in the neighborhood playing dangerous games, with them merely taking a look every now and then to see whether we prove we will survive ourselves.
You might or might not fatally cripple the opponent, but retaliation can do that too and you cannot be sure that it won't. It's MAD all over again.
In those terms, the US should have been nuking and dominating everyone, and the idea was floated after WW2, but I believe they were precluded by practical limitations.
If they had developed the tech outside of wartime, and built up a stockpile, maybe that is indeed what would have happened and we'd have a one-world government already.
It relied on two principles, "the chain of suspicion" and "technological explosion", which don't hold true if we are on the same planet. You can google it (or llm it) :)
I have my own theory of the dark forest and AGIs: that there's some collection of AGIs out there that allows evolution to develop intelligence wherever it happens, takes a civilization out once it produces an AGI, or, if it doesn't, performs a reset. They have literally all the time available to them, and can easily travel the vast distances if needed.
While it's possible that some civilizations would hypothetically be able to observe what happened to others and keep quiet, they would all have to do so to solve the contradictions of Fermi's paradox.
X=1.0000001
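(Reading X as a per-year growth factor, which is my assumption about what this line means, the point is that anything even a hair above 1 compounds absurdly over cosmological time. A quick sketch:

    import math
    # Even the tiniest growth factor explodes over a billion years:
    X, years = 1.0000001, 1_000_000_000
    print(X ** years)                     # ~2.7e43
    print(math.exp(years * math.log(X)))  # same number: about e**100

Finite resources, growth factor above 1: something has to give.)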
One thing I would have expected of someone who knows their history: forget LLMs, this is how startups have worked for decades now. You're only as good as your idea, your ability to execute, and your moat. And the small fish get eaten.
> The original Dark Forest assumes civilizations hide from hunters - other civilizations that might destroy them. But in the cognitive dark forest, the most dangerous actor is not your peer. It’s the forest itself.
Note the needless undercutting of the metaphor for the sake of the limp rhetorical flourish.
> I wrote this knowing it feeds the thing I’m warning you about. That’s not a contradiction. That’s the condition. You can’t step outside the forest to warn people about the forest. There is no outside.
Quite dramatic!
Except literally going outside and just talking to people? Using whiteboards?
Also, you fed it when you used a model to write this blog post. You didn't have to do that.
HN needs a better AI slop filter.
Or maybe I do. Maybe I can vibe code a browser extension that preloads TFA links and auto-hides anything that isn't sufficiently human-authored.
My hope is the opposite. Integrative, resonant computing (https://resonantcomputing.org/ https://news.ycombinator.com/item?id=46659456 although I have some qualms with its focus on privacy), with open social protocols baked in, seems like it maybe, possibly, can eat some of the vicious consumptive technocapital, in a way that capital's orientation prevents it from effectively competing with. MCP is already blowing up the old rules, tearing down strong gates, making systems more fluid / interface-y / intertwingular again, after a long interregnum of everything closing its APIs / borders.
People seem so tired and exhausted, so aware of how predatory the technosystems around us are. But it's still so unclear whether people will move, shift, much less fund and support the better world. The AT proto Atmosphereconf is happening right now, and there's been a long mantra of "we can just build things"; it's finding adoption, but also doing what conference organizer Boris said yesterday: "maybe we can just pay for things", supporting the projects doing amazing work. That's a huge unknown that is essential to actually steering us out of the dark technology, where none of us get to see or have any say in how the software-eaten world around us runs, where mankind for the first time in tens or hundreds of thousands of years has been cut off from the world OS, has been removed from the gods' enlightenment / our homo erectus, mankind-the-toolmaker, natural-scientist role.
I think the answer to the Dark Forest fear is building together. Being a radiant civilization, together. Energizing ourselves and leading ourselves towards better systems, where we all can do things, make things, grow things, in integrative, socially empowering ways.
But I don't see a trend of big companies really opening up. They usually open up only if it benefits them (which can happen, and did happen, in various scenarios). Everybody is accepting and open while trying to grow, and closes up once they can reach a monopoly.
The existing megacorps have huge swaths of infrastructure, expenses, and requirements that demand massive amounts of capex to maintain. Even if performative, Meta, Google, OpenAI, Anthropic, et al. cannot simply lay off their entire engineering, accounting, HR, sales, and support infrastructure. Those orgs are large for "good" (historically necessary) reasons.
Now fast-forward to today, and this is where I differ in opinion: it is our megacorps that are the civilizations who should be scared of being discovered. Minus the infrastructure providers, they are the large, advanced entities which can be annihilated by someone with a decent budget and a good local model.
For ~$30k-$50k (primarily buying RTX 6000 Pro GPUs and a CPU with enough PCIe lanes), "anyone" can build a system using open-weight models that - and let me truly emphasize this - autonomously creates functionality to compete. Previously it would take me months, or years, of immense dedication, showing up after work, to produce something of value. Now I can do it using excess compute on my existing workstation. No existing corporation can afford to undercut every possible idea. If I only gain 1,000, 10,000, or 100,000 users, they cannot compete. That may, and I believe it will, provide more than enough capital to attack megacorp X or Y. If I'm making $100k a month, I can afford multiple autonomous systems per month. After that initial capex, I can then hire other people to help manage them. At no point will a company with billions upon billions of dollars in quarterly capex be able to compete.
Maybe they can compete with one, two, ten, or a hundred but they cannot compete with the absolute onslaught on thousands of possible frontlines. They can cut costs, by reducing their workforce, but they’ll only be increasing their competition to save their earnings report.
And yes, I realize that the open-weight models are created via obscene amounts of capital, but we're lucky that competing nation states and cultures, like China, have immense incentive to do so. Good enough is still good enough.
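For a sense of scale on that budget, a rough back-of-envelope (all prices and sizes are my assumptions; check current street prices):

    # Back-of-envelope for the ~$30k-$50k figure above.
    gpu_price = 8_500   # assumed street price of one RTX 6000 Pro
    gpu_vram  = 96      # GB of VRAM per card
    gpus      = 4
    platform  = 6_000   # CPU with enough PCIe lanes, board, RAM, PSU
    print(gpus * gpu_price + platform)  # 40_000 -- inside the range
    print(gpus * gpu_vram)              # 384 GB of VRAM total
    # ~384 GB roughly fits open-weight models in the
    # few-hundred-billion-parameter class at 4-bit quantization,
    # with headroom for long contexts at smaller sizes.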
The forest may be dark, but it won’t be for much longer.
tldr; call an ambulance, but not for me. It's going to be for the existing power structure.
At the end of the day, Liu Cixin is basically a social darwinist who's got a thing for authoritarianism, and it bleeds through pretty heavily into his work. Dude is massively overrated imo.
In the beginning, you reached out with reckless abandon. It was fun to banter with dogs online. Nobody would ever see unless they were looking through your wall. There was no search. No comment history. Bumping into someone in the vast night was enough of a miracle. Why hold back? There are some forum warriors on some phpBB somewhere, but the domains they rule are insignificant. If you're talking to someone, your motivations are rooted somewhere in the grass.
First came the like button. Rather than blindly hoping what you say resonates with the sensibilities of people you probably knew IRL, rather than presenting your genuine self because there were no scores, the incentive signal would begin to distort us. Then the newsfeed meant that if you got enough likes, you might get a moment of fame. We all knew it was a terrible idea, a force that would only corrupt us. The personal nature of disjoint little walls living in isolation was being replaced by global stack-ranking.
Then the algorithms came. With them came content marketing to jump the line. At first the ten blue links were filling in the sparsity. Along with that came only a little bias, connecting semantically distant topics, but with a little bit of a feedback loop, a resonator with an unknown response curve. Engagement could be measured, and before long, we were chasing the same likes we used to train the system, and trained by our likes, attracted we became to mysterious stable manifolds, chasing the chase we ourselves define, like NASCAR, but insidiously more stupid.
Little by little, the incentive trails no longer lead back to the grass. Reality became suspended without support, a self-sustaining virtual reality determined to fight you to prove that it exists, to prove that its conclusions were right. Every out-group is understood to be an echo chamber, an ant mill spiraling helplessly; yet cynically, those who understand these mills best also wind them up like Beyblades to crash them into other communities, seeking advantage with the asymmetry of outrage. After the battles, say what was made common to say, and you will be rewarded.
The spinning wheels cannot steer themselves and instead are dictated by whichever chaotic divergence generates the most powerful local gravity well, but because the goal of most is to harvest karma at the bottom, and because the mass controls where the bottom is, over and over we find ourselves pushing all others into the nearest pit to more quickly generate the illusion-giving singularity.
Like Darth Nihilus, the internet seeks only to feed, to feed on the validation that only the internet can give, the permission-giving blessings it needs to tell itself why the grass is wrong. All those who speak of grass are wrong. All those who smell of grass reek and are wrong. We must destroy the grass, and all those who appeal to grass. After all, grass is dust; at last we will project our utopia into reality. At last we will be not only right but so right that our beliefs will project back into reality.
The spaces within this over-connected, globally addressed world grow into a new kind of sparseness, one where all knowledge of grass must be concealed. Those who can ground the conversations in primary sources flee. Those who can color reasoning with nuance instead withdraw. Reality has retreated as the most dominant reverberations roam like the predator cities of Mortal Engines, looking for any invalidating observations to roll over and consume. Any real life must pretend to be a bot to blend in with the background radiation.
Less like Skynet and more like a zombie apocalypse, the threat comes from within, from among us, from our corruptions, from our karma-seeking performances, from our lack of any commitment to any underlying reality, from our flawed belief that the information space is some kind of reality stone that enables active control instead of a mere reaction, the shadows on the wall, the murky results of the true forms.
Yet in this new darkness, a certain light has always held. What one wishes, one knows another has wished. What one respects, one knows another respects. No matter the limits of self-knowledge, no matter the information desert one has to cross at night to live in instinct, it is an infinitely brighter signal than the cynical self-corruptions of living for the machine, living to win the games whose rules it was our job to write. What one believes, one knows another has believed. Look into your own center and the true center of others you have known.
When there's higher violence and lower property values in a Black neighborhood, people like OP are quick to blame Black culture. But when the "Cognitive Dark Forest" emerges from a community that shares its own common characteristics, suddenly collective accountability no longer applies.
When discussing violence in the Black community, it's "cultural." But when the subject turns to financial crimes or exploitation — where the per-capita ratios tell their own story — proportionality and population-to-crime-rate analysis mysteriously stop mattering.
It's difficult to take the "Cognitive Dark Forest" seriously as an existential concern when the people raising the alarm are so selectively offended. The crisis only becomes real when their innovations, their livelihoods, and their moats are threatened. Everyone else was supposed to just adapt.
The "Cognitive Dark Forest" is and will be continued to be perpetuated by "them" and if you really cared about the issue you would have addressed them.
Feels like we are trying to put the author in a bad (racist or classist?) light so we do not have to address the real issues touched on by the article.