Quantum Mechanical Approach to Simulation of Soft Matter

by Victor Anisimov

1.1 Introduction

Molecular modeling is an actively progressing field of science which deals with the theoretical prediction of physical properties of materials based on established mathematical relationships between the elementary building blocks of materials at the atomic level. Developing the necessary understanding of these mathematical relationships and finding ways to improve them serves the important goal of merging theory with experiment. The utility of molecular modeling in chemistry and biology stems from its providing atomic-level details not available from experimental studies. Based on theoretically derived insights, simulations help in understanding experimental data and through that assist in designing new experiments and materials.

From the early days of its infancy, molecular modeling outgrew research problems limited to simple systems such as the hydrogen and helium atoms, reached the level of small molecules, and eventually entered the domain of macromolecules encompassing tens and hundreds of thousands of atoms. Nowadays molecular modeling is exploring multimillion-atom systems. The complexity of the studied problems progressed from computing atomic spectra to predicting the molecular structure of small molecules, to chemical reactions, and eventually to studies of free-energy-driven processes in biology. As an additional outcome, this evolution resulted in a better understanding of the role of molecular modeling in chemistry and biology. From the naïve idea that wet-lab experiments can be replaced by virtual computational laboratories, molecular modeling matured to specifically seek new physical insights about chemical or biological systems which are difficult or impossible to obtain using existing experimental techniques. This transition has been promoted by the move into the domain of large systems, made possible by the increasing power of computer hardware and by algorithmic improvements. Indeed, there is a rich experimental arsenal of tools for the study of small molecules in their different physical states, whereas the variety of experimental methods and the level of detail they provide quickly vanish when going to materials of chemical and biological significance. Therefore molecular modeling is often the only theoretical tool which can provide insight into the structure-property relationships of complex materials where present experimental methods have limited reach. The shift in the mindset of molecular modelers toward searching for new physical insights from simulations can be viewed simultaneously as the cause and the effect of the methodological effort to design new simulation techniques that can specifically reveal such insights. This evolution suggests that further progress in molecular modeling is firmly linked to the development of new, advanced computational methods.

The success of theoretical simulations of the materials collectively known in chemistry and biology as soft matter vitally depends on properly addressing their specific structural aspects. For a simulation to match laboratory conditions, the simulated system must contain tens of thousands of atoms and have its configuration space rigorously sampled. Thus the theoretical method should be able to deal with systems of very large size, and do so in an efficient manner. An additional requirement is to provide an experimentally relevant level of accuracy, which is typically in the range of 1 kcal/mol. These two requirements of high speed and high accuracy combined represent a daunting challenge to simulation methods. Unlike the situation with a small molecule, where speed and accuracy can be traded one for another in order to better address specific goals, such a trade-off is of limited utility with very large systems. Here, simultaneously addressing both speed and accuracy is an absolute requirement for the simulation method to be of practical use in material design.

In the light of the above requirements, techniques to significantly speed up the calculations while simultaneously reaching increased accuracy are at the heart of modern molecular modeling theory. Once we accept that speed and accuracy cannot be traded one for another on large systems, a logical conclusion is to try to trade off some other aspect of the simulation, e.g. generality. For instance, if there is an opportunity to limit the applicability of the simulation method to a somewhat narrow but practically very important application domain while providing high speed and accuracy on the relevant calculations, then a solution is obtained. This is how classical force fields came to light and changed our way of thinking, albeit not entirely without the fair amount of good luck that often accompanies fundamental discoveries.

Another practical approach to effectively applying simulation methods to material design emerged as a counter-response to the brute-force approach, which attempts to predict every aspect of chemical and biological systems from simulation. It is indeed aesthetically very pleasing to define the Schrödinger equation or some sort of potential function for a chemical or biological system, push the "run" button, and then read the prediction from the dials. It might indeed be an attractive approach provided the right prediction arrives in a reasonable amount of time, while we are still interested in the solution. Unfortunately, the answers for simple systems are already known, so it makes no sense to run such calculations, whereas approaching the practically interesting problems is beyond the reach of modern technology. The impracticality of the brute-force approach suggests exploring alternative avenues, e.g. looking for possible shortcuts. In the latter strategy the system is prepared and sampled in the vicinity of the point in configuration space which holds the sought answer, thus minimizing the amount of necessary calculation. However, there is no universal recipe for where and how to look, so the best we can do is guess. The necessary intuition comes with practice and experience.

Besides the computational bottleneck and the configuration space explosion, there are other reasons leading the brute-force approach to failure. For instance, errors due to insufficient accuracy of the simulation method can add up and eventually destabilize the simulation. On the other hand, an insufficiently accurate method can still be useful if it is applied in the right place, particularly in the case of error cancellation. This is what makes molecular modeling both a difficult and an exciting area. Moving steadily from one successful step to another and gradually building up experience in the application domain and the methodology are the present reality and the future of molecular modeling as a rapidly evolving branch of science at the interface of theory and experiment.

When weighing the possible directions for new methodological developments in the theory of molecular modeling of materials, the central theme is to guarantee sustained progress in the accuracy of computer simulations. As a starting point we have mature classical force fields, which do amazing work. What's next? A significant improvement may come from extending the force fields by implementing electronic polarizability in the classical mechanics framework. While this work is still in progress, and will possibly deliver the anticipated outcome, we may look a little further and ask ourselves what the long-term perspective of computational methods in chemistry and biology is. What are the factors which limit the accuracy of simulations, and which ones should be addressed immediately? Since quantum mechanics provides the fundamental theory of matter, moving the simulation platform from classical force fields toward a quantum mechanical one seems a logical choice. This can be achieved by upgrading the classical system Hamiltonian to resemble the quantum mechanical one while retaining the wealth of methodological advantages brought into the field by classical force fields. Such a mathematical theory of chemistry and biology as branches of material science is yet to be developed. It is surely going to be a rewarding experience, yet full of challenging work due to the general complexity of quantum mechanics and the related algorithms. This work serves the purpose of making the advanced theory affordable for the wide range of life scientists and students having only a basic mathematical background in integral and differential calculus. The presented material provides the necessary background to stimulate and further promote new theoretical developments in chemistry and biology.

1.2 The Blueprint of Molecular Modeling

Molecular modeling traces its roots to the 1920s, which marked the dawn of quantum mechanics. It was the time when, by the account of Robert S. Mulliken, all the integrals appearing in quantum mechanical equations had to be solved manually using pencil and paper. Not surprisingly, only simple molecules like water, ammonia, formaldehyde, etc. could be treated in such an elaborate way. The rise of digital computers greatly simplified these elaborate calculations, eventually helping molecular modeling to become a separate interdisciplinary branch of science pursuing the understanding of chemical and biological phenomena based on the principles of theoretical physics. However, the first significant breakthrough in using computer modeling to solve practical problems came not from chemistry but from mechanical engineering. In the 1990s, Boeing engineers performed the first fully computer-based design of an entire airplane, the Boeing 777, without the costly and time-consuming investment in building and testing physical prototypes. Every component of the aircraft was simulated and tested for fit and interference with many thousands of other parts in a completely assembled virtual airplane. The Boeing 777 program was launched in 1990, and five years later the first airplane left the assembly line and entered commercial service.

The degree of maturity of computer modeling technology in mechanical engineering provides an exemplary level for other computationally intensive fields to match. The goal of computer simulation in chemistry and biology is to design new materials and pharmaceuticals. It is not yet feasible to design new materials by computational methods alone; laboratory experiments still remain the primary driving force in these fields. Yet, as mechanical engineering has shown, the ice is broken: computer modeling can indeed represent reality fairly well. The only serious obstacle on this path is our presently limited understanding of chemical and biological processes. Since any quantitative theory of nature, already developed or yet to be created, is rightfully called physics, it is natural that chemistry and biology turn to physics to supply them with the necessary theoretical basis.

1.3 The Relationship between Chemistry (Biology) and Physics. Dirac’s Paradox

Among the natural sciences, which include physics, chemistry, biology, etc., physics has been charged with the particularly challenging role of developing a quantitative theory of nature. This goal traces its roots to Democritus' idea of atoms, according to which the macrocosm is viewed as composed of tiny invisible particles whose varied composition is responsible for the unique properties of materials. Establishing a mathematical relationship between the building blocks of matter is the undisputed domain of physics.

Unlike physics, chemistry and biology put their effort into discerning the empirical rules for producing new materials with useful properties. Since the purpose of their studies is also the understanding of nature, albeit from a different perspective, a question arises: is the difference between chemistry and biology on one side and physics on the other something important to understand for the future progress of science, or is it rather a superficial matter? What do we indeed know about nature?

According to the physicist's point of view, there are universal laws of reality, and one should be able to predict the emergence and the characteristics of chemical and biological systems from these very fundamental laws (A. Sergi, Quantum biology, DOI: 10.1478/C1C0901001). This approach is best expressed in the famous statement made by Paul A. M. Dirac, who shared a Nobel Prize in physics with Erwin Schrödinger for the discovery of the quantum theory of matter: "The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems without too much computation." (P. A. M. Dirac, Proc. Roy. Soc. London Ser. A, 1929, 123 (792), 714-733). The subsequent progress in theoretical and experimental physics has further reinforced this view by demonstrating a remarkable agreement between theory and experiment in explaining phenomena ranging from the sub-atomic level to the level of the macrocosm, thus supporting the statement that physics has indeed reached a fundamental understanding of nature. Since chemistry and biology are parts of nature, it is tempting to extend the success of physics to chemistry and biology and to declare the existence of a theoretical solution to their research problems.

This opinion faces criticism from a practical point of view. Contrary to physics, chemistry and biology have evolved as phenomenological branches of science, which are completely independent of physics. They deal with the systematization of experimental facts, thus avoiding the need for an underlying fundamental mathematical theory. The alienation from physics can be traced in the steady decline of interest of the leading chemistry and biology journals in publishing molecular modeling studies alongside experimental papers. This suggests that, contrary to the physicists' point of view, the modern state of the art in physics does not provide sufficient insight to chemistry and biology to assist them with their main job of discovering new materials. Logically, this should not happen if physics did indeed possess a complete understanding of chemistry (and biology), as it claims. This returns us to the starting point. What if the link between physics and chemistry (and biology) is not yet sufficiently understood? If so, then resolving this issue might indeed stimulate the development of a quantitative theory of chemistry and biology.

To clarify the relationship between physics and chemistry, it is beneficial to review the origin of their disagreement. According to the physicist's point of view, physics has reached a complete understanding of individual molecules and thus, presumably, of chemistry. However, chemistry has never been about individual molecules; that is the domain of molecular physics. Similarly, the progress of physics in understanding quarks and leptons, which also compose materials, is of no practical use in chemistry. Further, going from one molecule to a million molecules is not going to transform molecular physics into chemistry either. Chemistry works with continuous matter and seeks ways to develop new materials with designed properties. Therefore, according to the chemist's point of view, physics does not know anything about chemistry but confuses it with molecular physics. The problem is that molecular physics does not provide the knowledge of how to develop new materials, and neither does statistical mechanics. To the present day, the connection of microscopic events with the rational design of macroscopic properties remains essentially unresolved. Thus the origin of the disagreement between physicists and chemists on the progress of chemical theory seems to lie in the different meanings these two groups put into the definition of chemistry. It would be rather provocative to presume that the physicists' understanding of chemistry better reflects reality than that of chemists.

It is certainly indisputable that physics possesses a complete knowledge of atomic-level processes, which provides a starting point for the development of chemical and biological theories. There is also a valid point in considering chemistry (and biology) as a part of physics, since developing a quantitative theory of matter is the domain of physics. However, this does not imply that molecular physics is equivalent to a quantitative theory of chemistry. Materials are indeed composed of molecules, but there is a void between the knowledge of individual molecules and the ability to discern from it the physicochemical properties of materials. Conventionally, the task of providing the link between the microscopic and macroscopic worlds is assigned to statistical mechanics. The combination of molecular physics with statistical mechanics is thought, at least in principle, to provide a solution to any problem in chemistry. However, what if such a solution can never be implemented? Would the theory still be considered complete?

According to the opinion of Dirac, which is shared by the majority of physicists, solving chemistry problems is only a matter of solving complex mathematical equations. Since these equations are too elaborate to be solved at present, could the difficulty in solving them be a sign of something being overlooked? Obviously, the existence of computational limitations at the present technological level does not indicate a fault in the theory. Still, it would be reasonable to expect at least a proof that such a solution potentially exists in the domain of reality. Thus the physicists' understanding of chemical phenomena could be tested against the feasibility of computationally predicting the outcome of a chemical experiment conducted in the smallest chemical reactor that still represents a macroscopic system.

It is interesting to find how powerful a computer must be in order to conduct the necessary simulations. A chemical reactor holding 1 μL of water contains roughly 10^20 molecules. Considering the extremely idealized case that an energy computation of such a system requires 10^20 floating point operations, i.e. one floating point operation per molecule, a molecular dynamics simulation of this system for 1 second of physical time using a 1 fs (10^-15 s) integration time step will require performing 10^35 floating point operations in total. An idealized supercomputer built out of one million of the fastest Nvidia Tesla graphics processor cards, each capable of 10^12 FLOPS (FLoating point Operations Per Second), would have an unimaginable performance of 10^18 FLOPS. This supercomputer would finish the above simulation in 10^17 seconds, which is comparable to the age of our Universe. Meanwhile, the chemical process in the reactor will typically finish in just one second, and the result can be analyzed without the need to do any computations.
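
For the reader who wishes to check the arithmetic, a minimal back-of-the-envelope sketch is given below (Python is used purely for illustration; all numbers are the idealized values assumed above, not properties of any real machine).

```python
# Back-of-the-envelope estimate of simulating 1 uL of water for 1 s of
# physical time, under the idealized assumptions stated in the text.

molecules = 1e20                # approximate number of water molecules in 1 uL
flop_per_step = molecules       # assumed: one floating point operation per molecule
time_step = 1e-15               # 1 fs integration time step, in seconds
steps = 1.0 / time_step         # steps required to cover 1 s of physical time

total_flop = flop_per_step * steps            # ~1e35 operations in total
supercomputer_flops = 1e6 * 1e12              # 10^6 GPU cards at 10^12 FLOPS each

wall_time_s = total_flop / supercomputer_flops    # ~1e17 seconds of wall time

print(f"total operations: {total_flop:.0e}")
print(f"wall time on the idealized machine: {wall_time_s:.0e} s")
```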

Perhaps the computational barrier we face in the above example is only a matter of the present technological level, and future progress in computer technology will sooner or later remove this limitation. It is interesting to see how much further we need to improve our already highly idealized supercomputer in order to make the above computations feasible. Assuming that Moore's law holds permanently, so that we are able to double the density of integrated circuits, and the computational power along with it, every two years indefinitely, we would need to replace the current micrometer-size (10^-6 m) circuits by 10^-23-meter ones, which is a million times smaller than the size of a proton. Alternatively, the current gigahertz (10^9 Hz) CPU clock speed would have to be replaced by 10^26 Hz circuitry, with a wavelength smaller than the size of an atom and a million times shorter than that of gamma rays. Neither option, assembling subnuclear-size circuits or gamma-ray-frequency CPUs, can be implemented without violating the fundamental laws of physics. Besides, whatever progress in computer hardware is achieved, it will be immediately wiped out by upgrading the extremely oversimplified computational model to a realistic one even remotely approaching the quantum mechanical level. This analysis suggests that brute-force numerical computation, using either conventional or futuristic quantum computers, is never going to replace the experiment in chemistry, either in technical feasibility or in cost-to-performance ratio.
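
The Moore's-law side of the argument can be made concrete with the same kind of illustrative estimate. The sketch below (hypothetical numbers, following the assumptions above) counts how many density doublings, at one doubling every two years, would be needed for the idealized 10^18 FLOPS machine to keep pace with the reactor's one second, and what CPU clock frequency the same speedup would correspond to.

```python
import math

# Illustrative Moore's-law extrapolation under the assumptions in the text:
# the idealized 10^18 FLOPS machine must complete the 10^35-operation run
# in ~1 s to keep pace with the real reactor.

total_flop = 1e35               # total operations from the previous estimate
current_flops = 1e18            # idealized present-day supercomputer
target_wall_time = 1.0          # seconds; match the real chemical reactor

speedup = total_flop / (target_wall_time * current_flops)   # ~1e17

doublings = math.log2(speedup)  # ~57 density doublings
years = 2 * doublings           # ~113 years of uninterrupted Moore's law

clock_now = 1e9                 # ~1 GHz CPU clock
clock_needed = clock_now * speedup   # ~1e26 Hz, the figure quoted in the text

print(f"required speedup: {speedup:.0e}")
print(f"doublings: {doublings:.0f} (~{years:.0f} years)")
print(f"equivalent clock frequency: {clock_needed:.0e} Hz")
```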

In performing such an analysis one should, of course, admit the limited ability of the human mind to predict the future based on the current level of knowledge. Still, there is another aspect of the analysis which is unaffected by this limitation. In nature, information about the potential travels from any corner of the system to another at the speed of light, so every atom senses almost instantaneously what is happening with the other atoms, whereas computation is an inherently serial process of collecting the signal and returning the response, at best at the frequency of the CPU clock, for each atom in the system sequentially. Although this happens at blazing speed compared to anything the human brain can perceive, computation is no match for atomic processes.

Here we come to the realization of an important difference in computer modeling when comparing chemistry with mechanical engineering. Unlike computational mechanical engineering, which replaces the slow processes of building and testing physical prototypes (taking weeks, months, years), in computational chemistry one tries to model extremely fast atomic processes with inherently slow computations. Therefore, it turns out that computation is a very inefficient way to design new materials. It is clear now that, in the form of chemistry and biology, nature has found a much better device for designing new materials than modern physics can offer. How nature does that holds the answer to the real quantitative theory of chemistry and biology. This constitutes a logical contradiction between physics' claim of possessing a mathematical theory of matter and the impossibility of using this theory to predict the physical properties of matter without violating the fundamental laws of physics. This is Dirac's paradox. Although we understand that the theory is correct, the above contradiction would not arise if the theory of chemistry were indeed complete. This is the factor inspiring continuous efforts to develop a quantitative theory of chemistry. As it appears, the current state of the art in physics provides the necessary groundwork but an insufficient solution for chemistry and biology.

There have been other examples in the history of science when having valid knowledge did not guarantee its practical utility outside of the original application domain. A typical example is Archimedes' lever. The discovery of the lever marked tremendous progress in mechanical engineering. Inspired by the versatility of his device, Archimedes exclaimed: "Give me a lever long enough and a fulcrum on which to place it, and I shall move the world." It is clear from the current perspective that moving the Earth from its orbit would require a different theory. This does not mean that the equation of the lever is wrong, but rather that its applicability beyond the original domain is limited. A similar situation occurs in Dirac's paradox, when the theory developed for molecular systems is applied to continuous matter. Obviously, quantum mechanics offers important insights into chemical phenomena related to individual molecules. It also provides a framework and a reference point for calibrating approximate methods. Although quantum mechanics offers a fundamental theory of matter, its straightforward application to the prediction of physicochemical properties of chemical materials leads to explosive growth in the number of degrees of freedom considered and in computation time. Finding the proper mechanisms to reduce the dimensionality of the problem and to integrate the unnecessary degrees of freedom out into parameters is necessary in order to develop a quantitative theory applicable to material design.

Looking back into the origin of the overly optimistic conclusions appearing from time to time in the history of science, it is apparent that it is natural for an inspired human mind to jump ahead of reality and to prematurely declare that everything is already known. Archimedes made such a mistake after the enormous success of the discovery of the lever; Dirac followed a similar path after the revolutionary discovery of quantum mechanics, coming to the premature conclusion that chemistry is fully understood by physicists. A bit earlier, around 1900, physicists also believed that they knew everything about nature. The discovery of the quantum of energy by Max Planck, which revolutionized the entire science, gives compelling evidence that learning the miracles of nature is a never-ending process.

Even now, when the accumulated knowledge exceeds the capability of a single person to grasp the boundaries of molecular modeling, there is still a lot of work to do in physics. Since chemistry and biology are phenomenological disciplines, they turn to physics to supply them with theoretical concepts. The part of physics dealing with the design of new materials is yet to be discovered. This goal may not be achieved in a feasible time, yet a basic recipe is already well known, and it relates to removing unnecessary degrees of freedom from the system. For instance, electron-electron interactions happen via the exchange of photons, as described by quantum electrodynamics at the most fundamental level; however, a large part of physics is well described by the less fundamental but still very accurate Coulomb law. Simplification of theory is the necessary approach for continuing to close the gap between microscopic and macroscopic processes and for finding new quantitative forms of the theory of matter that better suit the needs of chemical and biological research.

1.4 Understanding of Soft Matter

Unlike the problems studied by molecular physics, which pertain to single molecules or clusters of molecules in the gas phase, chemistry and biology deal with cooperative effects created by many millions of atoms constituting a condensed phase. It would be tempting to conclude that the difference between the gas phase and a material is only in the number of particles, but in reality that is only the tip of the iceberg. Forces and interactions which either do not exist or can safely be neglected or cut off in small systems become non-negligible or even critically important in systems representing the condensed phase. This means these two classes of systems need different theoretical treatment. Another unexpected problem is the exponentially rising cost of sampling the configuration space of the system with an increasing number of atoms. Whereas small molecules can reasonably be treated either as rigid or in their easily found global energy minimum conformation, macromolecular systems have so many closely spaced local energy minima that the notion of a global energy minimum becomes practically useless. In the condensed phase no single conformation is solely representative of the physical properties of the system, and the latter can be obtained only as an ensemble average. Another key difference between gas-phase and condensed-phase systems is in the character of the energies we should be concerned with. In small-molecule systems the knowledge of the potential energy is mostly sufficient to characterize the system. This notion produced very fruitful concepts like the reaction barrier, the intrinsic reaction coordinate, the barrier to rotation, etc. Unlike the case of small molecules, the processes in the condensed phase are driven by free energy changes, which require knowledge of the full configuration space of the system. Thus the knowledge of the potential energy of the system is no longer sufficient to characterize the system, and the entropic term must be considered explicitly, together with the computational difficulties this brings.
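
The point about free energy and configuration space can be summarized with the standard canonical-ensemble relations of statistical mechanics (a textbook reminder, not specific to any particular simulation method): observables are ensemble averages over the configuration space, and the free energy depends on the whole configurational partition function rather than on any single minimum-energy structure.

```latex
% Canonical-ensemble relations: configurational partition function,
% ensemble average of an observable X, and the Helmholtz free energy
% with its energetic and entropic contributions.
\begin{align}
  Z &= \int e^{-U(\mathbf{r})/k_{\mathrm B}T}\, d\mathbf{r} \\
  \langle X \rangle &= \frac{1}{Z} \int X(\mathbf{r})\,
      e^{-U(\mathbf{r})/k_{\mathrm B}T}\, d\mathbf{r} \\
  A &= -k_{\mathrm B}T \ln Z = \langle U \rangle - TS
\end{align}
```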

The materials considered in chemical and biological simulations are collectively known as soft matter. This definition includes liquids, gels, biopolymers, oligomers, etc., which do not exhibit explicit periodicity in their structure, and in which the entropic term is responsible for the disorder in the structure of the system. This makes soft matter different from the solid state, where atoms essentially stay in the same place and the existence of periodicity can be effectively exploited to simplify computations. In soft matter, the particles of the system have considerable freedom to travel within it. In liquids, such mobility results in the transport of matter in the form of diffusion. In biopolymers, the mobility of particles results in a dynamically changing 3D structure. Such systems are computationally much more challenging than solids.

When choosing a proper level of theory, particle mobility and structural disorder introduce an additional factor that makes computations of soft matter more difficult than in the case of gas-phase systems and solids. In the gas phase one does not need to worry about reproducing the experimentally determined volume, density, and pressure of the system, so the researcher is free to use any level of theory suitable for the problem in question. Choosing the level of theory is also not a problem with solids when their particles are restrained or constrained to their experimentally determined coordinates, usually from X-ray crystallography. Thus realistic computer simulations of such systems can be performed with considerable flexibility in choosing the level of theory. No such freedom exists when dealing with soft matter. If the accuracy of the selected level of theory is insufficient, the volume, density, and pressure of the system will not be correctly reproduced and the simulation will be meaningless.

Correctly accounting for long-range forces in soft matter simulations places a very strong requirement on the choice of the level of theory. Soft matter is essentially less tolerant of deficiencies in the level of theory than gas-phase and solid-state systems. It means that upon going from small molecules in the gas phase to soft matter, the complexity of the simulation increases not only because of the increased number of particles in the system but also because of the requirement of greater accuracy. At present, the combined requirement of high speed and accuracy can only be met by introducing adjustable parameters into the simulation theory.

Introducing parameters inevitably reduces the range of applicability of the method to that of the trained properties. For the simulation method to be applicable to material design in chemistry and biology, the choice of parameters is dictated by the reproduction of experimentally measurable physical properties of materials. These include liquid density, heat of vaporization, free energy of hydration, etc., which provide the opportunity to fine-tune the balance of short-range vs. long-range forces necessary for the internal consistency of the method. Such a protocol was first successfully introduced for classical force fields and is the present de facto standard in materials science.
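
As an illustration of how such an observable enters the parameterization, a commonly used estimator of the heat of vaporization is shown below; this is a standard textbook relation assuming an ideal-gas vapour phase, not a prescription tied to any particular force field.

```latex
% Heat of vaporization estimated from simulation: average potential energy
% of an isolated (gas-phase) molecule minus the average liquid potential
% energy per molecule, plus the ideal-gas pV term.
\begin{equation}
  \Delta H_{\mathrm{vap}} \;\approx\; \langle U_{\mathrm{gas}} \rangle
    \;-\; \frac{\langle U_{\mathrm{liq}} \rangle}{N} \;+\; RT
\end{equation}
```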

1.5 The Choice of Level of Theory

For many decades after the discovery of quantum mechanics, physicists held the firm belief that theoretical insight into all chemical and biological phenomena can be reached by seeking the solution of the Schrödinger equation. This belief drove the active development of the computational methods of molecular physics known as quantum chemistry, intended for the quantum mechanical study of the physical properties of individual molecules. Since it was soon realized that the properties of chemical, and particularly biological, materials cannot be discerned from the study of individual molecules, the need for further methodological developments became apparent.

The advent of classical force fields brought immense progress in linking molecular theory with chemical and biological experiments. Central was the discovery of the significance of non-bonded interactions for the prediction of the physical properties of soft matter. After that, further effort was primarily concentrated on improving the description of these interactions. In the language of physics, classical force fields rely on the mean field approximation as their theoretical basis. According to the latter, the constant part of the potential field acting on the particles in the molecule can be integrated out and represented via parameters, thus removing the unnecessary degrees of freedom from the system.
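
For reference, a representative force-field energy expression is sketched below. The specific functional forms and parameter values (k_b, k_theta, V_n, epsilon, sigma, q) differ between force fields, so this should be read as a generic illustration of bonded terms plus pairwise non-bonded terms rather than as the expression of any particular force field.

```latex
% Generic classical force-field potential energy: harmonic bonds and angles,
% a cosine dihedral series, and pairwise Lennard-Jones plus Coulomb
% non-bonded terms; all electronic detail is folded into the parameters.
\begin{align}
  E ={}& \sum_{\text{bonds}} k_b (b - b_0)^2
       + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2
       + \sum_{\text{dihedrals}} \frac{V_n}{2}\bigl[1 + \cos(n\phi - \delta)\bigr] \nonumber \\
     &+ \sum_{i<j} \left\{ 4\varepsilon_{ij}\!\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12}
       - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right]
       + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} \right\}
\end{align}
```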

Despite the significant progress achieved in computational biophysics due to the advent of classical force fields, there are growing concerns about the limited accuracy of the classical approach. For instance, the inability of a single set of parameters to suit hydrophobic and hydrophilic environments equally well has been linked to the importance of electronic polarization. Other related examples are the systematic overestimation of predicted protein-ligand binding free energies when using the MM-PBSA approach (Molecular Mechanics based Poisson-Boltzmann implicit solvent model with a surface-area treatment of the non-polar term), and the limited accuracy of scoring functions and of automatically generated ligand parameters. The assumption of the insignificance of charge transfer, which force fields are based upon, has recently been challenged by quantum mechanical calculations. Although the effect is small, it might be the factor determining the present accuracy limits of the force fields. Due to the abundance of ionized species in nature, charge transfer may be a predominant mode of polarization of large and highly ionized biological systems.

The modern computational platform for soft matter simulations, besides being accurate enough to suit present needs, also has to provide the conditions for sustained accuracy improvement. One such direction could be incorporating electronic polarizability at the classical force field level. Although this work shows promising potential, it is still at the proof-of-concept stage after two decades of intensive work. As often happens with research problems, finding a solution that ensures sustained accuracy improvement turns out to be a more challenging problem than anticipated. Thus determining the most effective ways to reach better accuracy in material simulation is still an open problem. Continuing the line of thought brought in by polarizable force fields, a logical approach to accuracy improvement would be incorporating an even greater level of physics into the simulation methodology. On this path it is reasonable to turn to quantum mechanics, the fundamental theory of matter, for additional insights. First-principles quantum mechanical methods, with their sophisticated mathematical complexity, are out of the question in this search. Only those methods which are comparable in performance to classical force fields can be of practical interest. Among those, semiempirical quantum mechanical methods are the closest in spirit to classical force fields. They are the fastest among the quantum mechanical methods, and their accuracy can be tuned to a particular application in the same way as was done with force fields, via optimized parameters. Optimization of semiempirical parameters using the principles developed for classical force fields guarantees the necessary level of accuracy. In the end, such optimization is what made classical mechanics suitable for describing atomic-level processes; reaching the same level of accuracy, and going beyond it, is guaranteed in the quantum mechanical framework, which is the native theory for atomic-level processes. Since quantum mechanics provides a rich arsenal of mathematical methods to further improve accuracy, e.g. using larger basis sets, split-valence basis sets, etc., the necessary level of accuracy can be added on demand, ensuring sustained accuracy improvement of the theory.

As the reader might have noticed, we never used the argument that quantum mechanical effects are important in chemistry and biology. There is much confusion on this subject, including speculation that quantum mechanics is not important in biology. To clarify this issue, we must first agree that molecular modeling is an important part of chemical and biological research. In the moment of truth, whether the computational prediction is obtained using classical or quantum mechanical methods is of little significance. Therefore the original question about the significance or non-significance of quantum methods in biology is entirely superficial. Once the simulation is made feasible, the only aspect which matters is how accurate the theoretical prediction is. If the necessary level of accuracy is obtained by using classical mechanics methods, then there is no need to go to the more expensive quantum mechanical methods. However, there is at present a strong demand to increase the accuracy of classical force fields. Among the strongest voices is the pharmaceutical industry, pointing to the methodological vacuum in the existing methods of computer-aided drug design, which fall far behind the necessary accuracy level. It is certainly possible to add more parameters to classical force fields and get the necessary accuracy improvement by further narrowing down their application area. But parameters cannot be developed for every potential drug compound in the infinite chemical space. Thus the present methodological level cannot be considered anywhere near satisfactory, and the use of quantum mechanical methods in biology is motivated by the need to augment the use of classical force fields in those areas where the application of force fields is less efficient than that of quantum mechanical methods.

To completely close the issue of the misconception about quantum mechanical effects in biology, it is necessary to address one more popular line of thought. According to a frequently voiced opinion, there is no such field as quantum biology because all quantum effects, e.g. light absorption, electron transfer, etc., are in the domain of quantum chemistry. In the light of our previous discussion, it would be equally correct to say that there are no quantum mechanical effects in chemistry either, since all such effects happen at the single-molecule level, which is the domain of molecular physics. Since there is no doubt that quantum mechanical effects are significant in chemistry, this shows that the argument about the non-existence of quantum biology is a logical misconception. As with quantum chemistry, the name quantum biology first of all means a useful application of quantum mechanical methods in the respective area of life science. In reality it is not possible to separate effects into classical and quantum mechanical ones when discussing atomic-level processes. Attempts to enforce such a distinction are utterly non-physical. In the light of the above, the field of quantum biology is introduced as the application of quantum mechanical methods to biological materials in order to contribute important knowledge about their underlying atomic-level processes. The same should be said about quantum chemistry: it deals with the application of quantum mechanical methods to the design of chemical materials; however, for historical reasons the definition of quantum chemistry is tightly glued to single molecules. The material-design-based definition of quantum chemistry will eventually prevail, as will that of quantum biology, after accumulating increasing evidence of the feasibility and utility of applying quantum mechanical methods in material design.

 

Copyright (c) 2011 Victor Anisimov