This post is a translation of a post that appeared on my Swedish blog in May 2019.
The other week I read an optimistic blog post on the subject of machine learning by the American skeptic and neurologist Steven Novella. He wrote, among other things, about an American research group that has trained a neural network to determine properties of doped graphene, that is graphene where some of the carbon atoms are replaced with other elements, from the placement of the dopant atoms. Novella chose to portray this as the neural network being able to perform decades of research in the course of a few days, and hinted that this could give us practical applications of graphene considerably earlier than if no machine learning had been used.
As someone who is interested in both graphene and machine learning, I obviously had to find the scientific paper the group had published and try to figure out what they had actually done.
The research question
The paper in question is published in npj Computational Materials (it is also open access, by the way) and according to the title it deals with the prediction of the so-called band gap of materials that are a combination of graphene and boron nitride. Boron nitride is a material that consists of two types of atoms, boron and nitrogen, arranged in a hexagonal lattice just like the carbon atoms in graphene. Also just like graphene, boron nitride can be produced as just a single, super-thin layer of atoms. These similarities between the two materials are a part of the reason why people try to combine them.
Another part of the reason is that while graphene has excellent electrical conductivity, it is very difficult to get boron nitride to conduct electricity at all. This difference arises because it takes fairly little energy to get the electrons in graphene moving, while the electrons in boron nitride need a lot of extra energy to reach a state where they are mobile. This energy boost that the electrons need in order to move is also a measure of the band gap (corresponding to a gap in energy between different states that the electrons can be in). Graphene thus has an extremely small band gap, while boron nitride has a large one. By combining the two materials, people hope to create a hybrid material with a band gap of a size that is useful for e.g. applications in electronics.
However, it turns out that you cannot just replace a few carbon atoms with boron and nitrogen. How the boron and nitrogen atoms are arranged in relation to each other matters for how large the band gap of the resulting material turns out to be. What the American research group has done is try to predict the size of the band gap based on the placement of boron and nitrogen atoms using artificial neural networks, more specifically so-called CNNs or convolutional neural networks.
The neural networks
CNNs are a type of neural network developed to pick out characteristic features from images and then classify the images based on those features - they are useful, for example, for facial recognition and when self-driving cars need to tell the difference between a pedestrian and a road sign. The basic principle of a CNN is similar to comparing small regions of a picture with smaller, simpler images and giving a positive response where they are similar. If, for example, you have a picture of a house and the smaller image contains a vertical line, you might get a positive response at the corners, doors or windows, since their depictions contain straight, vertical sections. In a CNN, however, both images are represented as matrices of numbers, and there are several layers where the result of one comparison with a smaller image is in turn compared with more matrices (this is needed to identify more complex features in the image). Also note that the smaller image (or filter) is not something you define beforehand, but something the network learns. If your training data contains no straight lines, the filters that result from training will probably not contain any either.
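The region-by-region comparison described above can be sketched in a few lines of Python. This is a minimal illustration of the convolution operation itself, not the networks from the paper; the image and the hand-written vertical-line filter are made up for the example (in a real CNN, the filter values would be learned):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image and record how strongly
    each region matches it (no padding, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            region = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(region * kernel)
    return out

# A tiny "image" containing a vertical line of ones
image = np.array([
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 0, 0],
])

# A hand-written filter that responds to vertical lines;
# in a real CNN these values come out of training
kernel = np.array([
    [-1, 1, -1],
    [-1, 1, -1],
    [-1, 1, -1],
])

# The response is largest where the line in the image lines up
# with the filter's centre column, and negative elsewhere
response = convolve2d(image, kernel)
print(response)
```

A full CNN stacks many such filters in several layers, with the outputs of one layer serving as the "image" for the next.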
To be able to use CNNs for the graphene problem described above, the researchers chose to use computer models where each pair of atoms is represented by a number. When boron and nitrogen atoms are introduced into graphene, they usually come in pairs, with a boron and a nitrogen atom next to each other. The researchers therefore chose to represent a boron-nitrogen pair with a "one" and a carbon-carbon pair with a "zero", and thereby constructed an image of the material that different types of CNNs can handle. They also built their networks to give the size of the band gap as output.
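The encoding step might look something like the sketch below. The grid of pair labels is invented for illustration and is not the paper's actual data format; only the one-for-boron-nitrogen, zero-for-carbon-carbon idea comes from the study:

```python
import numpy as np

# Hypothetical 4x4 grid of atom pairs: "BN" marks a boron-nitrogen
# pair, "CC" a carbon-carbon pair (the labels are illustrative)
pairs = [
    ["CC", "BN", "CC", "CC"],
    ["CC", "BN", "BN", "CC"],
    ["CC", "CC", "CC", "BN"],
    ["BN", "CC", "CC", "CC"],
]

# Encode the configuration as the binary "image" a CNN can process:
# 1 for a boron-nitrogen pair, 0 for a carbon-carbon pair
image = np.array([[1 if p == "BN" else 0 for p in row] for row in pairs])
print(image)
```

Note how much is thrown away in this step: the image records only where the pairs sit, nothing about their internal orientation - a point the post returns to below.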
Neural networks need to be trained with relevant data in order to work, something that usually involves automatically comparing the output of the network with the desired result, calculating the deviation, and adjusting the network to give a better answer. In order to train their neural networks, the researchers therefore generated several thousand possible configurations and calculated the band gap of each configuration using density functional theory. The trained networks were then used to predict the band gap for another batch of configurations whose calculated band gaps were known but which had not been used in training. The results turned out to be very promising.
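The train-then-evaluate procedure can be sketched as below. To keep the example self-contained, a simple linear model trained by gradient descent stands in for the CNN, and the data is synthetic; in the actual study, the targets would be band gaps from density functional theory calculations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row is a flattened configuration,
# each target a made-up "band gap" (real targets would come from
# density functional theory)
X = rng.random((200, 16))
true_w = rng.random(16)
y = X @ true_w

# A linear model as a minimal stand-in for the CNN
w = np.zeros(16)
lr = 0.1
for step in range(2000):
    pred = X @ w                     # model output
    error = pred - y                 # deviation from the known result
    loss = np.mean(error ** 2)       # how wrong the model is
    grad = 2 * X.T @ error / len(y)  # direction to adjust the model
    w -= lr * grad                   # gradient descent update

# Evaluate on held-out configurations that were not used in training
X_test = rng.random((50, 16))
test_mse = np.mean((X_test @ w - X_test @ true_w) ** 2)
print(loss, test_mse)
```

The key point mirrored from the study is the last step: the model's quality is judged on configurations it never saw during training.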
What can we learn from this?
So what is the effect of this study? The researchers have successfully shown that it is possible to predict certain properties of materials using neural networks, which should give those who do research on graphene and other two-dimensional materials another tool that they can use in their research. There is still a long way to go from this particular study to electronics based on graphene and boron nitride, but it may make it easier to know what kind of material configurations are worth working on.
Another interesting thing about this study is what it says between the lines about the limits of machine learning. For the method to work at all, the neural network needs to receive all the relevant information in a format it can process, which means that quite a bit of knowledge about graphene and boron nitride is required just to formulate the problem in a way that can be tackled. For example, in this study the researchers chose to focus entirely on where each boron-nitrogen pair sits in relation to the other pairs, and thus discarded all other characteristics of the material, presumably based on what is already known about these materials. (As an example, the relative orientation of neighbouring boron-nitrogen pairs is completely ignored - is it boron-nitrogen-nitrogen-boron or boron-nitrogen-boron-nitrogen? This information has been trimmed away before the neural network is involved.)
A known limitation of neural networks is that it is hard to understand why they work the way they do, even when they give good results. In a study like this one it would have been very interesting to see what the structures with low or high band gaps, respectively, have in common, but that is not information that is easy to extract from the neural network itself, and the researchers do not seem to have made any effort to try. I strongly suspect that a method for understanding what goes on inside the networks is necessary for this type of study to be helpful in understanding the studied materials.
As you have probably understood by now, I do not quite agree with Steven Novella about this one fairly limited study showing that neural networks will do decades of research in a few days and take us significantly closer to graphene electronics, but the results in it are still interesting as an example of machine learning in materials physics.