This was a great read - a few points stuck out to me, not necessarily particular to this study.
I applaud the authors for making their data and code available. This is the direction that open seismology should be going. This is particularly important when using a real-time earthquake catalog such as the ANSS ComCat catalog.
As you mentioned, ComCat is updated and curated continually, contains event information from many different sources (e.g., regional networks and the USGS), and is, therefore, dynamic. Perhaps in the future the catalog can be versioned more formally, but as it stands, including the earthquake catalog underlying the research is an important step for reproducibility.
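Until formal versioning exists, one lightweight practice is to archive the exact query results alongside the query parameters and access time. A minimal sketch in Python, assuming ObsPy is installed; the query window and magnitude cutoff are placeholders, not values from any particular study:

```python
# Minimal sketch: snapshot a ComCat query for reproducibility.
# The query window and magnitude cutoff are placeholders.
from datetime import datetime, timezone

from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("USGS")  # ComCat's FDSN event service

params = dict(
    starttime=UTCDateTime("2019-07-01"),
    endtime=UTCDateTime("2019-07-31"),
    minmagnitude=2.5,
)
catalog = client.get_events(**params)

# Archive the events together with the query parameters and access time,
# since ComCat itself is not versioned and events may change later.
catalog.write("comcat_snapshot.xml", format="QUAKEML")
with open("comcat_snapshot_meta.txt", "w") as f:
    f.write(f"accessed: {datetime.now(timezone.utc).isoformat()}\n")
    f.write(f"params: {params}\n")
```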
This brings up another point: earthquake catalogs (estimates of magnitude and location) can be dynamic for many reasons. A set of seismic stations could go down, changing the network configuration; the underlying algorithms used to compute locations and magnitudes may change; the duty seismologist who normally processes events may have been on leave; retrospective QC may be performed. If research relies on the quality of an earthquake catalog, it may be beneficial to reach out to the data providers to ensure that the catalog is being interpreted correctly. For example, a common mistake is unintentionally mixing magnitude types, because a catalog may report different preferred magnitude types by default depending on event size. You also never know whether more attention was spent cataloging earthquakes surrounding significant events because of special interest.
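To make the magnitude-type pitfall concrete, here is a minimal sketch (again assuming ObsPy; the query is a placeholder) that tallies the preferred magnitude type of each returned event. A mix of types in the tally is a sign the catalog needs more careful handling:

```python
# Minimal sketch: tally preferred magnitude types in a ComCat query.
# A mix (e.g., "ml" for small events, "mww" for large ones) suggests the
# catalog is defaulting to different preferred magnitudes by event size.
from collections import Counter

from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("USGS")
catalog = client.get_events(
    starttime=UTCDateTime("2019-07-01"),  # placeholder window
    endtime=UTCDateTime("2019-07-31"),
    minmagnitude=3.0,
)

types = Counter(
    (ev.preferred_magnitude().magnitude_type or "unknown").lower()
    for ev in catalog
    if ev.preferred_magnitude() is not None
)
print(types)  # e.g., Counter({'ml': 120, 'mb': 14, 'mww': 2})
```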
As an aside, when I read a paper focused on earthquake prediction, there are two things I key into. 1) How many events was the method applied to? We have a robust global earthquake catalog, so my expectation is that a meaningful method should hold up across a wide range of earthquakes. 2) How much attention was spent on the false positive rate? An EQ prediction is only good if we don't cry wolf.
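For concreteness, the bookkeeping I have in mind is nothing fancy; a toy sketch with made-up binary alarm data:

```python
# Toy sketch: alarm-based scoring of a prediction method. The arrays are
# made-up placeholders (1 = event occurred / alarm issued per bin); a real
# evaluation needs a well-defined, out-of-sample test set.
import numpy as np

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])  # did an event occur?
y_pred = np.array([0, 1, 1, 0, 1, 1, 0, 0])  # was an alarm issued?

tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
tn = np.sum((y_pred == 0) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

fpr = fp / (fp + tn)       # how often we cry wolf
hit_rate = tp / (tp + fn)  # fraction of events successfully predicted
print(f"false positive rate = {fpr:.2f}, hit rate = {hit_rate:.2f}")
```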
Hi Will, thanks for the comments! It helps a lot to know some of the reasons why earthquake catalogs can be updated over time, even long after the earthquakes actually happened. We have often started writing about an earthquake, only to find out that some aspect like the origin location or the focal mechanism has been updated in the meantime, sometimes changing our interpretation entirely. It's just a natural part of the process. Is there a public-facing article somewhere about the dynamic nature of ComCat? I'm not sure what proportion of people who download data are really aware that events are often updated, even quite old events. Cheers - Kyle.
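In the meantime, one quick check we've found useful is ComCat's "updated" field, which records when an event was last modified. A minimal sketch against the public fdsnws-event GeoJSON feed; the event ID is a placeholder:

```python
# Minimal sketch: query ComCat's "updated" timestamp for a single event
# to see when it was last modified. The event ID is a placeholder.
from datetime import datetime, timezone

import requests

EVENT_ID = "ci38457511"  # placeholder ComCat event ID
url = "https://earthquake.usgs.gov/fdsnws/event/1/query"
resp = requests.get(url, params={"format": "geojson", "eventid": EVENT_ID})
resp.raise_for_status()

props = resp.json()["properties"]
origin = datetime.fromtimestamp(props["time"] / 1000, tz=timezone.utc)
updated = datetime.fromtimestamp(props["updated"] / 1000, tz=timezone.utc)
print(f"origin time:  {origin}")
print(f"last updated: {updated}")  # often long after the origin time
```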
We discussed a similar topic in a recent paper: https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2023JB027337