As much as I'd like to avoid criticizing "Moore's law", this is a good example of why we shouldn't use the term "law" unless the effect described is a properly scientific description of a universal truth.
Both of these statements are mere observations of trends (short-term ones in the grand scheme of things). Nothing about the trends observed in "Eroom's law" must hold true in the future and, similarly, nothing about the trends observed in "Moore's law" must remain true in the future.
There are no causal links behind either of these "laws", nothing to make them universally applicable throughout time. Indeed, both are stated as functions of time with an implied start date. These qualities make both mere observations of trends rather than proper scientific laws.
In the case of Moore's "law" the implied outcome seems optimistic, and I think that is why we have so far given it a pass on being called a "law". In the case of Eroom's "law" the implied outcome is much more pessimistic, so I am betting we'll be unlikely to want to recognize it. However, if we accept Moore's observation as a "law", we'll have a hard time pointing to a distinction that prevents Eroom's observation from being a "law".
In my opinion, the right answer is to stop implying that either of these observations is inevitable or representative of a truth about the universe. Then we can properly recognize that both trends are easily subject to disruption through hard work and breakthroughs in related fields, or the lack thereof.
Anyone know if this applies to antibiotics? I think a lot of people, myself included, are hoping that new drugs will buy us enough time to solve the societal problems (e.g. heavy use in agriculture) causing antibiotic resistance.
I’ve heard a theory that the era of small molecules targeting single complexes is probably on the sharply downward slope of an asymptote. If we’re lucky, that will just mean new therapies targeting complex systems rather than single drug targets are in sight. If we’re unlucky, there will be a harsh gap between the two eras.
Sounds like NASA and governmental space programs. I mean, it feels like the cost and complexity of, for example, flying to the Moon has been doubling each decade or so since the original Apollo days and has today reached the level of practical impossibility. (And that state of things set the stage for Musk.)
Part of me wonders whether things could improve if the resources put toward "drug discovery" were instead allocated to tooling that lets individuals synthesize drugs tailored to their present state.
With more and more data available on the drugs released by the FDA, EMA, etc., combined with cheap genetic sequencing and other measurements, part of me wonders whether statistical approaches from some subfields of chemistry, combined with bioinformatics, could move things from an era where certain drugs are mass-produced to one where specific compound interactions are brought about based on an individual's state.
Razib Khan talked about this recently:
"There’s a debate that periodically crops up online about the utility, viability, and morality of returning results from genetic tests to consumers. Consumers here means people like you or me. Pretty much everyone.
If you want to caricature two stylized camps, there are information maximalists who proclaim a utopia now, where people can find out so much about themselves through their genome. And then there are information elitists, who emphasize that the public can’t handle the truth. Or, more accurately, that throwing information without context and interpretation from someone who knows better is not just useless, it’s dangerous.
Of course, most people will stake out more nuanced complex positions. That’s not the point. Here is my bottom-line, which I’ve probably held since about ~2010:
- The value for most people in actionable information in direct-to-consumer genetics is probably not there yet when set against the cost.
- With the reduction in the cost of genotyping and sequencing, there’s no way that we have enough trained professionals to handle the surfeit of information. And there will really be no way in 10 years when a large proportion of the American population will be sequenced.
Moore's law was quite an outlier in the history of technology, and one thing that kept it going is that the returns kept up with the cost of the increasingly complex technology that it demanded.
The paragraph that dismisses the 'low-hanging fruit' explanation of Eroom's law looks rather weak to me - it says there are still many potential targets, but does not consider their technical feasibility, cost, or potential ROI.
To some degree, isn't this expected? I mean, if real GDP growth is 1 or 2 percent per year, why wouldn't some of that go to drug prices? Plus, why wouldn't this just be a preference thing?
I think the underlying assumption is that more money is being spent for drugs that are equivalent. Is that a reasonable or provable assumption?
This was a superb Wikipedia article and a clever name. I particularly like the Beatles reference.
"The cost of developing a new drug roughly doubles every nine years."
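Taken at face value, a nine-year doubling time compounds quickly. A quick back-of-the-envelope check (illustrative arithmetic only, not figures from the article):

```python
def cost_multiplier(years, doubling_period=9):
    """How much the quoted trend multiplies costs over a span of years."""
    return 2 ** (years / doubling_period)

# Three doublings in 27 years: costs multiply by 8.
assert cost_multiplier(27) == 8

# Over the ~60 years Eroom's law is usually plotted (1950-2010),
# 2 ** (60 / 9) is roughly a 100-fold increase in inflation-adjusted
# cost per approved drug.
```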