@grumplet Thank you. I think there are several flaws with this reduction to background mortality. I followed similar arguments about COVID-19 in the beginning, but 1. mortality is only part of the story, many people who get sick will be left with reduced lung capacity 2. looking at the last major pandemic of 1918, the virus evolved massively over a year and came back twice, better adapted, even affecting many people who were immune from the previous wave. So we do not really know what is going to happen over the year, but we can make estimates based on assumptions from the development so far. I agree that it looks as if this is not going to kill a large part of the population.
The 1918 H1N1 virus is basically entirely different from Cov2, so I don't think it has much to offer for future predictions of Cov2. Cov2 also has a much larger genome (~30 kb, versus ~13.5 kb for influenza), and part of its replication machinery is dedicated to error correction (proofreading). That means it should be (and currently appears to be) fairly stable. So, I would not expect much change over the course of a year.
Some more details on this from Trevor Bedford: https://twitter.com/trvrb/status/1242628550563250176?s=20
Some good info there. Bedford's point that the Cov2 mutation rate is severalfold (roughly 2x to 8x) LESS than influenza's fits nicely with what I understand about it.
Thanks @jsa-aerial! I was not aware of the genomic differences. That does suggest the flu virus was able to evolve much more rapidly; I don't have a background in genomics, though.
In our paper we also used an SEIR model, as Julia Gog is doing. Although I am not an expert, these models lean strongly toward mathematical tractability rather than a detailed understanding of what is happening on the ground. It is true that compartmental models can be extended with more compartments for a more fine-grained simulation of society (in fact this is something I am very interested in researching, in terms of inference in multifidelity models coupled to a simulator like FRED), but on their own they cannot model complex non-smooth or discrete controls interacting with individuals in the population. I would not call these models state-of-the-art, but they are well understood and seem to be very popular in the mathematical branch of epidemiology. I am not an expert in epidemiology though, and all disclaimers raised in our paper definitely hold.
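For anyone curious what the basic compartmental machinery looks like, here is a minimal deterministic SEIR sketch in Clojure with simple Euler time steps. The function name and all parameter values are my own illustrative assumptions, not fitted estimates from anyone's paper.
```
;; Minimal SEIR sketch, Euler integration. Parameters are illustrative only.
(defn seir-step
  "Advance the SEIR state by dt days.
   beta: transmission rate, sigma: 1/incubation period, gamma: 1/infectious period."
  [{:keys [s e i r]} {:keys [beta sigma gamma dt]}]
  (let [n              (+ s e i r)
        new-exposed    (* beta s i (/ dt n))   ; S -> E
        new-infectious (* sigma e dt)          ; E -> I
        new-recovered  (* gamma i dt)]         ; I -> R
    {:s (- s new-exposed)
     :e (- (+ e new-exposed) new-infectious)
     :i (- (+ i new-infectious) new-recovered)
     :r (+ r new-recovered)}))

;; Illustrative run: one infectious case in a population of one million,
;; assuming beta 0.5/day, 5.2-day incubation, 7-day infectious period.
(def params {:beta 0.5 :sigma (/ 1 5.2) :gamma (/ 1 7.0) :dt 0.1})

(->> {:s 999999.0 :e 0.0 :i 1.0 :r 0.0}
     (iterate #(seir-step % params))
     (take 1200)   ; ~120 simulated days
     last)
```
This is exactly the "mathematically tractable" end of the spectrum: four aggregate compartments and smooth flows, with no way to represent discrete interventions acting on individuals.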
And mortality spikes when societies run out of health care capacity, even for younger cases, of course. Italy has a 5% death rate not just because its population is older, but also because its health care system is exhausted; it was massively cut by austerity after 2008. So mortality will not spike only if the disease is kept within what the health care system can handle. Winton's blog post does not really cover that critical point.
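A toy illustration of that capacity effect (the fatality rates, bed counts, and function name below are made-up assumptions, just to show the mechanism, not estimates for any real health system):
```
;; Treated severe cases die at rate p-treated, untreated at p-untreated.
;; All numbers are assumptions for illustration only.
(defn effective-fatality
  [severe-cases icu-beds p-treated p-untreated]
  (let [treated   (min severe-cases icu-beds)
        untreated (- severe-cases treated)]
    (/ (+ (* treated p-treated) (* untreated p-untreated))
       severe-cases)))

(effective-fatality  500 1000 0.01 0.10) ;; => 0.01  (system within capacity)
(effective-fatality 5000 1000 0.01 0.10) ;; => 0.082 (system overwhelmed)
```
Same virus, same patients, roughly 8x the fatality once severe cases outnumber the beds.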
Exactly; the death rate is complicated because it depends on infrastructure, population demographics, access to healthcare, etc. However, given how far behind some countries (cough... cough... USA) are in testing, it can still be helpful information to consider.
If you haven't yet, you might want to take a look at this group's extended SEIR model: https://clojurians.slack.com/archives/C0BQDEJ8M/p1585585173039700
^ Seems to get much closer to "what is happening on the ground"
Pardon me for asking, and hopefully no one will take it personally, but as an engineer I have the following questions: 1. How do data scientists/researchers professionally contribute to fighting the pandemic? 2. Do those activities really make any difference "on the ground"?
Pretty topical: the Royal Society has many data scientists and researchers who work closely with the UK government to provide models for how different decisions might affect the pandemic. They recently put out a call for volunteers because of how much work is required to model the best exit from self-isolation: https://royalsociety.org/news/2020/03/Urgent-call-epidemic-modelling/ So whether we are stuck at home for a week or 3 months may come directly from those models, and whether that means we should expect a recurrence around October or not is all down to how well the modelling goes! 🤞
That depends on your definition of data science. All the data available to run the forecast models (and to estimate the right parameters for those models) is riddled with biases: changing testing criteria, testing availability, overflowing hospitals leading to jumps in the death rate. A large part of not producing crappy COVID-19 papers right now (and I am afraid biorxiv and medrxiv are a bonanza of bad papers at the moment) is inspecting and cleaning data to get a good dataset from which to extract crucial parameters like the serial interval, the time between symptoms and death, and the case fatality rate. Good data gives a model a chance to be accurate and useful in predicting the effect of extending a lockdown or opening schools.
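A quick toy example of one of those biases, the reporting lag in the naive case fatality rate. The counts, the 3-day lag, and the function names are synthetic, purely to show why the naive number can be badly off during exponential growth:
```
;; Cumulative cases and deaths on consecutive days (synthetic: cases double
;; daily, and deaths equal 8% of the cases recorded 3 days earlier).
(def cases  [100 200 400 800 1600 3200 6400 12800])
(def deaths [1   2   4   8   16   32   64   128])

(defn naive-cfr [cases deaths]
  (/ (double (last deaths)) (last cases)))

(defn lag-adjusted-cfr
  "Divide today's cumulative deaths by the cumulative cases lag-days earlier."
  [cases deaths lag-days]
  (let [idx (max 0 (- (count cases) 1 lag-days))]
    (/ (double (last deaths)) (nth cases idx))))

(naive-cfr cases deaths)          ;; => 0.01 (looks reassuring)
(lag-adjusted-cfr cases deaths 3) ;; => 0.08 (the "true" rate in this toy setup)
```
Getting the delay distributions right (and correcting for testing biases on top of that) is exactly the unglamorous data work that separates useful estimates from noise.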
Pretty much all government-level decision making right now is based on what epidemiological models say will happen if you do X (I guess I'd call that a sub-branch of data science; it's my field), combined with data on supply chain limits, hospital beds available, and economic impact models.
But aside from the people who actually have the ears of government decision makers (modellers at CDCs or similar groups), there are thousands of researchers publishing COVID-19 studies that are fairly irrelevant to what happens on the ground right now, except when those papers are of good enough quality that they get picked up by the public health modellers. I would say that is a fair assessment.
(Source: I worked on figuring out the danger of the 2009 flu at a public health institute while it was still mainly limited to Mexico [so you try to work with very limited, poor data], and my gf is currently coordinating the research at another public health institute for this pandemic. I have also modeled pandemics and diseases for the last 16 years.)