Sunday, March 27, 2011

March Meeting: where is computational physics going?

At the focus sessions on computational physics at the APS March Meeting in Dallas, several challenges for the computational physics (or, more generally, computational science) community were identified. Probably the most important one is to further develop education in computation-related sciences and to make it a standard part of undergraduate curricula.
There were also more technical points, such as the need for algorithm design to take parallel computing into account, and for programs to be made robust against data loss (e.g. the failure of a subset of computing nodes); a sketch of the latter idea follows below.
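As a minimal illustration of the robustness point, consider periodic checkpoint/restart, sketched here in Python with hypothetical names and intervals (nothing below is from the talks): the loop state is written to disk atomically at regular intervals, so that after a node failure the run resumes from the last checkpoint instead of from scratch.

    import os
    import pickle
    import random

    CHECKPOINT = "state.pkl"

    def save_checkpoint(state, path=CHECKPOINT):
        # Write to a temporary file, then rename: a crash mid-write
        # leaves the previous checkpoint intact (os.replace is atomic).
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            pickle.dump(state, f)
        os.replace(tmp, path)

    def load_checkpoint(path=CHECKPOINT):
        # Resume from the last checkpoint if one exists, else start fresh.
        if os.path.exists(path):
            with open(path, "rb") as f:
                return pickle.load(f)
        return {"step": 0, "accum": 0.0, "rng_state": random.Random(42).getstate()}

    state = load_checkpoint()
    rng = random.Random()
    rng.setstate(state["rng_state"])

    for step in range(state["step"], 100000):
        state["accum"] += rng.random()   # stand-in for one simulation step
        if (step + 1) % 10000 == 0:      # checkpoint interval: a cost/safety trade-off
            state["step"] = step + 1
            state["rng_state"] = rng.getstate()
            save_checkpoint(state)

    print("accumulated:", state["accum"])

The atomic rename is the important detail: without it, a failure during the write could corrupt the only copy of the state. Saving the random-number-generator state along with the data makes the restarted run statistically continuous with the original one.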
Further issues came up in the discussions after the talks, all of them, in my opinion, more or less related to each other. It still seems necessary to advocate for the relevance of computational physics. While this is not a problem in condensed matter physics (in my experience at least), it was reported to be different in parts of the atomic physics community.
Another important issue was addressed towards the end of these sessions: should a researcher doing computational physics be able to write the code on her/his own? Or would it (even at the level of a principal investigator) be enough to be able to use a code? I think there should be a balance between writing codes and actually producing scientific output. It is optimal if the PI knows how the codes work, so that she/he can supervise interested students in implementing or extending them. At the student level, just using codes might in some cases be enough, but ideally there should also be people in each group who are able to work on the codes. The collaboration this would hopefully foster within such groups would help prevent computational physics codes from mistakenly being regarded as black boxes, and it could on the other hand also reduce the problem of programmers endlessly tinkering with their codes.
In my opinion it would be a problem for the computational physics community if the usage and development of codes were separated further. The groups simply using codes would be the more successful ones (due to higher publication output), but opportunities to learn how things are calculated, and in some sense even what is calculated, would be missed (necessarily reducing the quality of the research). Groups focusing mainly on code development would miss out on publications (unless code development were given more importance and even considered scientific output in its own right, i.e. unless there were more journals like Computer Physics Communications - how about a physical-society-based journal?), and their students might lack training in how to do scientific work or communicate results. I thus believe that, besides asking experts from e.g. applied mathematics for help, keeping or bringing usage and development of codes (back) together will benefit both sides.

2 comments:

  1. I don't mind other people writing codes that I "merely" use. For my master's project, I spent 90% of my time programming a QMC code for spin glasses and got very few results. During my PhD studies I have been using a commercial DFT code and have been able to publish a lot.

  2. Setting up or constructing a new instrument generally requires a significant amount of time, too (probably without publishable output during that period), and yet it is often worth the effort.
    As new students continue to work with this equipment, they do not have to start from scratch, but can continuously improve it and add further knobs and analyzers.
    In computational science, and in particular in a group that both develops a code and uses it to do science, new students can benefit from what is already there and start producing results early, while still being able to add functionality to the code (which, particularly in the case of open source, benefits everyone) and to learn something about the foundations their results rely on.
