At the focus sessions on computational physics at the APS March Meeting in Dallas, several challenges for the computational physics community (and computational science in general) were identified. Probably the most important one is to further develop education in computing-related sciences and to make it a standard part of undergraduate curricula.
There were also more technical points, such as paying more attention to parallelism in algorithm design, and making programs robust against data loss (e.g. upon failure of a subset of computing nodes).
Further issues came up in the discussions after the talks, in my opinion all more or less related to each other. It still seems necessary to advocate the relevance of computational physics: while this is not a problem in condensed matter physics (at least in my experience), it was reported to be different in parts of the atomic physics community.
Another important issue was addressed towards the end of these sessions: should a researcher doing computational physics be able to write the code on her/his own, or would it (even at the level of a principal investigator) be enough to be able to use a code? I think there should be a balance between writing codes and actually producing scientific output. Ideally, the PI knows how the codes work, so she/he can supervise interested students in implementing or extending them. At the student level, just using codes might in some cases be enough, but ideally each group should also have people able to work on the codes. The collaboration that would hopefully result within such groups would help to keep computational physics codes from being mistakenly regarded as black boxes, and on the other hand could also reduce the problem of programmers merely playing around with their codes.
In my opinion it would be a problem for the computational physics community if the usage and development of codes were separated further. Groups that simply use codes will be the more successful ones (due to higher publication output), but they miss the chance to learn how things are calculated, and in some sense even what is calculated (which necessarily reduces the quality of the research). Groups focusing mainly on code development will miss out on publications, unless code development were given more weight and even considered scientific output in its own right, i.e. unless there were more journals like Computer Physics Communications (how about a physical-society-based journal?). And their students might lack training in how to do scientific work and how to communicate results. I therefore believe that, besides asking experts from e.g. applied mathematics for help, keeping (or bringing back) together the usage and development of codes will benefit both sides.
In the preprint "Modified string method for finding minimum energy path" (arXiv:1009.5612), Amit Samanta and Weinan E describe a method for finding so-called minimum energy paths (MEPs). These are the paths carrying the most statistical weight with respect to a transition in configuration space. Such a transition could, e.g., be a diffusion process, shown below for proton conduction in SrTiO3 (Sr, Ti, O and H depicted as green, gray, red and white spheres, respectively; the potential energy surface has been calculated using the Quantum Espresso package); see the preprint for other examples.
I have compared their proposed modification of the string method to the nudged elastic band (NEB) method. Both methods sample the path with a finite number of images, and an initial guess for the path is optimized by following the gradient. The two methods differ in how intermediate images are prevented from sliding down the barriers: the NEB introduces virtual spring forces that keep the images apart, while the string method iteratively re-parametrizes the path so that it remains evenly sampled. Both methods perform similarly well. Shown below are the potential energies along the MEPs obtained with the string and NEB methods, respectively, for the above proton diffusion path in SrTiO3:
The residual gradients in the initial and final configurations seem to be slightly too large; the NEB path therefore tends to have minima away from these configurations. The string method, implemented here to fulfill Eq. 6 of the preprint at each optimization step, is more robust against this problem (for the NEB, the intermediate configurations at ~0.5 and ~4 Å should be relaxed separately as new boundary configurations). Interestingly, the new string method shows slightly better convergence of the MEP (here optimized using Broyden's method with rank-one quasi-Newton updates):
Instead of plain re-parametrization at each step, the more sophisticated schemes outlined in the preprint might yield even better convergence. Despite the simplicity of what was implemented here, it performs essentially as well as the NEB method.
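To make the comparison above concrete, here is a minimal sketch of the plain string method on the Müller-Brown potential, a standard 2D test surface used here in place of the SrTiO3 system: a steepest-descent step on all images, followed by re-parametrization to equal arc-length spacing. All parameter values are illustrative and not taken from the preprint or from my actual implementation.

```python
import numpy as np

def mueller_brown(xy):
    """Mueller-Brown test potential and its analytic gradient."""
    A  = [-200.0, -100.0, -170.0, 15.0]
    a  = [-1.0, -1.0, -6.5, 0.7]
    b  = [0.0, 0.0, 11.0, 0.6]
    c  = [-10.0, -10.0, -6.5, 0.7]
    x0 = [1.0, 0.0, -0.5, -1.0]
    y0 = [0.0, 0.5, 1.5, 1.0]
    x, y = xy[..., 0], xy[..., 1]
    v  = np.zeros_like(x)
    gx = np.zeros_like(x)
    gy = np.zeros_like(x)
    for i in range(4):
        dx, dy = x - x0[i], y - y0[i]
        e = A[i] * np.exp(a[i] * dx**2 + b[i] * dx * dy + c[i] * dy**2)
        v  += e
        gx += e * (2 * a[i] * dx + b[i] * dy)
        gy += e * (b[i] * dx + 2 * c[i] * dy)
    return v, np.stack([gx, gy], axis=-1)

def reparametrize(path):
    """Redistribute images to equal arc-length spacing (linear interpolation)."""
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s_new = np.linspace(0.0, s[-1], len(path))
    new = np.empty_like(path)
    for d in range(path.shape[1]):
        new[:, d] = np.interp(s_new, s, path[:, d])
    return new

def string_step(path, dt=1e-5):
    """One simplified string iteration: gradient descent, then re-parametrize."""
    _, grad = mueller_brown(path)
    new = path - dt * grad
    new[0], new[-1] = path[0], path[-1]   # keep the endpoint images fixed
    return reparametrize(new)
```

Starting from a straight line between two minima of the test surface and iterating `string_step` drives the images toward the MEP while keeping them evenly spaced; the maximum energy along the converged path then approximates the saddle-point energy.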
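For completeness, here is a minimal sketch of Broyden's "good" method with rank-one updates of an approximate inverse Jacobian, the kind of quasi-Newton scheme mentioned above for the MEP optimization. This toy root-finder starts from the identity as the inverse Jacobian; it only illustrates the rank-one update and is not the actual optimizer used for the paths above.

```python
import numpy as np

def broyden_root(f, x0, max_iter=50, tol=1e-10):
    """Find a root of f(x) = 0 via Broyden's 'good' method, updating an
    approximate inverse Jacobian with rank-one (Sherman-Morrison) updates."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    J_inv = np.eye(len(x))            # crude initial guess: identity
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        dx = -J_inv @ fx              # quasi-Newton step
        x_new = x + dx
        fx_new = f(x_new)
        df = fx_new - fx
        # rank-one update of the inverse Jacobian so that J_inv @ df == dx
        J_inv += np.outer(dx - J_inv @ df, dx @ J_inv) / (dx @ J_inv @ df)
        x, fx = x_new, fx_new
    return x
```

In the MEP context, f would be the projected force on all images rather than the simple test function used here; the appeal of the rank-one update is that no Hessian (or Jacobian) ever has to be computed explicitly.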