Charles Yang’s ML4Sci is a cool newsletter on applications of AI and machine learning to science.
In issue #8, his description of AI-powered Science as a Service briefs us on how AI can be used to distribute science.
“AI-powered models are beating domain experts in protein folding predictions, speeding up scientific simulations, discovering novel antibiotics, and outperforming numerical weather models.”
All fine, but I want to bring attention to an underlying assumption in SaaS: being a service. A service is measured by the results it provides, such as how accurate its predictions are. Or, closer to machine learning terminology, how accurate its classifications are.
Predictions have been core to scientific development for a long time. Especially in experimental science, the possibility of checking observations against predictions is an important part of what makes a theory falsifiable.
The novelty is that SaaS models are not theories, especially when we talk of Bayesian probability, big data, and deep learning. Often enough, small changes in the data bring very different results, not to mention overfitting and other problematic predictive illusions.
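To make the point concrete, here is a minimal sketch (using NumPy and made-up toy data, not any particular SaaS model) of how an over-flexible model can change its predictions after a barely noticeable change in its training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Twelve noisy observations of a simple linear trend.
x = np.linspace(0, 1, 12)
y = 2 * x + rng.normal(scale=0.1, size=x.size)

# Fit a deliberately over-flexible model: a degree-9 polynomial.
coeffs = np.polyfit(x, y, deg=9)

# Nudge a single observation by an amount well within the noise level,
# then refit the same model.
y_perturbed = y.copy()
y_perturbed[5] += 0.05
coeffs_perturbed = np.polyfit(x, y_perturbed, deg=9)

# Compare predictions slightly outside the observed range.
x_new = 1.05
print("original fit:  ", np.polyval(coeffs, x_new))
print("perturbed fit: ", np.polyval(coeffs_perturbed, x_new))
# The two predictions can differ noticeably even though the data barely
# changed: a fit that looks accurate on the training points but encodes
# no stable understanding of the underlying trend.
```

The point is not that any given SaaS works this way, but that a pipeline judged only by its output numbers can be this fragile without anyone noticing.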
So what? – asks the reader. So that instead of an exchange of theories among scientists, science would evolve by sharing data silos. And instead of producing knowledge, scientific progress would drift away from human understanding of the nature of the universe.
This difference may seem irrelevant when we are predicting rainfall, but what about economics or biology? In these fields, understanding the mechanics behind a prediction may be as important as its accuracy.
In the broader picture, this also raises questions such as: if science becomes a service, wouldn’t we quickly drive toward a monopolistic scenario in science, as we see in most data-intensive AI businesses?
I will keep following both lines of argument in posts to come. For now, let’s pay attention to these interesting developments.
And by all means, give ML4Sci a try.