Saturday, August 1, 2020

What do we need 5G for?

During the spring of 2020 the fifth generation of cell phone systems, 5G, was launched in Sweden, where I happen to live. Cell phone operators promise higher speeds and generally better cell phone service, but in the tech industry there are higher expectations. So what is it we're going to use 5G for?

Before we get to what people want to do with 5G, it can be a good idea to think a bit about what it is. On a very general level, a cell phone system needs wireless communication, in the shape of signals to and from a larger or smaller number of cell phones (or smart watches, tablets, etc.), and some type of core system or network that can pass information on to the ordinary phone network or the internet. In between these two parts are the base stations. For most of us, base stations are the most visible part of the system (with the exception, of course, of our own cell phone). Communication between base stations and cell phones uses electromagnetic waves, generally radio or microwaves. A "generation" of cell phone systems is a kind of agreement or standard covering everything from which frequencies of electromagnetic radiation are used and how base stations are built to how information sent over the network should look in order to be recognized and correctly interpreted. It is not a matter of a single invention or technological advancement but of an entire package of changes that work together in different ways.

The purely technological change that has gotten the most attention when it comes to 5G is the plan to use higher frequencies for communication, which in itself can make it possible to transfer data at a higher rate. But 5G also opens the possibility to package the information that is to be sent in a more efficient way, restructure the core network, and start using transmit and receive antennas with a larger number of antenna elements in the base stations. With more antenna elements it becomes easier both to transmit electromagnetic waves in a specific direction and to distinguish between signals received from different cell phones (it works a little like how you determine a direction using radar). The more efficiently signals from different cell phones can be distinguished, the larger the number of cell phones that can be handled by the same base station at the same time. Compared to earlier generations of cell phone systems, these changes are expected to allow information transfer at a higher rate, room for more concurrent users in high-traffic areas, and smaller delays (lower latency) in the information transfer.
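To get a feel for why more antenna elements help, here is a small numerical sketch (purely illustrative, not tied to any actual 5G base station design): the response of a simple row of antennas falls off much more sharply away from its pointing direction as the number of elements grows, which is what makes it easier to separate signals by direction.

```python
import numpy as np

def array_factor(n_elements, angle_rad):
    """Normalized response of a uniform row of antennas pointing straight
    ahead ('broadside'), with half-wavelength spacing between elements."""
    # Phase difference between adjacent elements for a wave arriving
    # at this angle, with element spacing d = wavelength / 2.
    psi = np.pi * np.sin(angle_rad)
    # Sum the contributions of all elements as complex phasors,
    # normalized so the response is 1 at angle = 0.
    return np.abs(np.sum(np.exp(1j * psi * np.arange(n_elements)))) / n_elements

# Response 20 degrees off the pointing direction:
off_axis = np.radians(20)
small = array_factor(4, off_axis)   # few elements: still picks up a lot
large = array_factor(64, off_axis)  # many elements: strongly suppressed
```

With 4 elements the off-axis signal is only mildly attenuated; with 64 it is suppressed by orders of magnitude, so two users 20 degrees apart can be told apart much more cleanly.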

If you use the 4G network in your everyday life with few or no problems, you might ask if any of these improvements are actually necessary. The thing is, the cell phone operators and other actors building the 5G network are not just catering to the needs of today's cell phone users. On one hand, they seem convinced that the type of cell phone users that exist today - you and me on our cell phones, calling and surfing - will come to expect higher speeds and more reliable connections. Maybe we will also use more devices per person, such as a cell phone plus a smartwatch with its own SIM card. This is something operators want to be prepared for in order not to lose customers. On the other hand, they are also optimistic about a different way of using the cell phone network in the future - namely, communication between machines.

Imagine that you are driving a car that is equipped with advanced technology to help you as a driver - maybe it is even self-driving in simple situations. Suddenly something changes in the environment - maybe the traffic slows to a standstill, or a rainstorm reduces visibility and makes the road slippery. In most cases you will notice these things when you get to them (though I guess Google Maps might warn you about the traffic jam). If the cars themselves could send information to each other, those that have reached the traffic jam or the rainstorm could automatically warn other cars in the vicinity. If they could receive data from the surrounding infrastructure - transmitters in road signs or something similar - the cars might receive even more useful information. Is the traffic jam due to construction, for instance, or has the speed limit been temporarily changed on a stretch of road? That kind of information could reach the car itself and appear on, for example, a heads-up display. For a (partially) self-driving car it would make it easier to adapt the driving to circumstances or to warn the human driver that they will have to intervene. This would, however, require reliable and fast wireless communication with low latency between a large number of users (cars and infrastructure transmitters). These are some of the things promised by 5G technology, and it is hoped that 5G will improve this vehicle-to-vehicle and vehicle-to-infrastructure communication.

Another frequently mentioned application for 5G is in so-called smart cities, which among other things means having a large number of sensors deployed to collect data on everything from traffic flow to water leaks. The sensors are supposed to be able to communicate via the cell phone network to flag problems or deliver data. There are also people who want to use 5G to improve people's health, for example by using wearable sensors, and to take the automation of factories further. This overview from IEEE gives a fairly good idea of what people hope to be able to accomplish (it's also two years old, but a quick search shows that similar ideas are still relevant - see here, here and here).

As is always the case with visions for the future, it is important to remember that no one knows exactly what future developments will look like. Some applications that people have high hopes for will probably turn out to be hard to realize or less useful than previously thought, while other ideas will come to the fore once people start to use the technology and see the possibilities.

Sunday, January 12, 2020

What is there to say about carbon capture and storage?

In December last year I spotted a kind of interesting thing in my Facebook feed: A post from Chalmers University of Technology describing how a few of their researchers, together with some other researchers from Stockholm University, had developed a new, promising material for carbon capture and storage (CCS). I found this interesting partly for materials science reasons and partly because by and large, I hear surprisingly little about CCS technology in general. Other technologies associated with reducing carbon dioxide emissions, like rechargeable batteries, fuel cells and biofuels, get substantially more attention. So what is there to say about CCS?

The idea behind CCS is to separate out carbon dioxide from a mix of gases, like the flue gases from coal- and oil-fired power plants, and thereby stop it from entering the atmosphere. Instead the carbon dioxide is taken away and stored. The ocean floor and certain types of bedrock are seen as promising storage sites, and countries like Norway are conducting tests where carbon dioxide from natural gas fields is stored in the bedrock of the ocean floor around the original deposits. All steps in this process are of course associated with technical challenges, costs and risks - for example, if the carbon dioxide escapes from its long-term storage it will both negate the benefits of capturing it in the first place and potentially harm people and the nearby environment. According to a review paper from 2018, published in the Royal Society of Chemistry journal Energy & Environmental Science, attempts to use CCS have frequently turned out more expensive and time-consuming than expected, partly due to installation costs and the need to build up infrastructure. Successful applications of CCS, on the other hand, offer the possibility of large and rapid reductions of carbon dioxide emissions.

The new material presented by the Chalmers researchers is supposed to be used in the carbon capturing step. To understand what problem they are trying to solve we can return to the review paper linked above. Among many other things, it describes how current CCS technology relies on aqueous solutions of chemicals to separate out carbon dioxide from other gases. This process has been used for a long time to separate out carbon dioxide from natural gas and thereby get a cleaner and better natural gas product to burn, but it can be adapted for exhaust or flue gases from for example power plants. Unfortunately, the carbon capturing in itself consumes extra energy since the carbon dioxide, once captured, also has to be extracted from the aqueous solution so that it can be transported away. This may be accomplished by heating the solution or changing the pressure. Additionally, many of these chemical solutions are toxic and in higher concentrations also corrosive, which can cause problems both with wear and tear and in the case of an accident where the solution leaks out.

One way to get around this problem would be to develop solid materials that can selectively absorb carbon dioxide. These could, apart from the lower risk of leakage, potentially be easier to install in existing plants and factories and also require less energy during the extraction of the captured carbon dioxide. The paper published by the Chalmers researchers presents one such material, consisting of two components. One component is a porous network of gelatin and cellulose (yes, cellulose as in wood, although it has been processed into a different form), which gives the material structure and rigidity while still being porous enough to easily let gases like carbon dioxide pass through. The other component is a powder of the mineral zeolite, which consists of silicon, aluminium and oxygen atoms and also has a structure full of large holes. Carbon dioxide molecules can attach themselves to the surface of this aluminium-silicon mineral, and thanks to the porous structure there is a lot of surface for them to attach to. This makes it a good carbon capture material.

The combination of zeolites, gelatin and cellulose in the new material is supposed to make it easy to handle and install, cheap, and also mostly biodegradable. It could therefore be part of the development of cheaper and more easily applicable CCS technology, which in turn might lead to wider use of, and more attention for, CCS. Of course, this connects to the other question that comes up when CCS is actually discussed, namely whether it is what we want. Something that separates CCS from technologies that receive more attention, like fuel cells and renewables, is that the latter have a clear role in a society that has transitioned away from fossil fuels. CCS, on the other hand, is all about still producing carbon dioxide but not actually letting it out. The interest, or lack of interest, in CCS is influenced by whether it is seen as a delay of, or obstacle to, a necessary transition away from producing carbon dioxide, or as a way to reduce carbon dioxide emissions faster than that transition can take place. There seem to be quite a few things to say about carbon capture and storage - from different perspectives.

Sunday, December 22, 2019

A walking piece of plastic, and conditioning

It has, for various reasons, been a while since the last entry on this blog. I was planning to compensate a bit for the delay with a deep dive into the technology behind carbon capture and storage, but then I saw a video of a walking piece of plastic on the website of Sweden's public service TV channel, SVT. Apparently the plastic material is of a type that normally bends when heated, but according to the video and article, scientists have "taught" the material to instead react to light. It can therefore drag itself forward like an inchworm if it is periodically illuminated.

So how exactly do you teach a piece of plastic to react to light?

The research behind this thing has also been presented in a paper in the journal Matter, a paper that fortunately is freely available (kudos!). According to the paper the piece of plastic in question is something called a liquid crystal polymer network. Polymers are long molecules consisting of smaller, identical pieces repeating in a long chain - cellulose for instance is a polymer consisting of long chains of identical glucose molecules. Plastics are generally polymers, like polyethene (chains of ethene molecules) or polystyrene (chains of styrene molecules - "poly" here just means "multiple" or "many").

Liquid crystals are also long molecules; some of them are in fact polymers as well. The unique thing about them is that they behave partly like liquids, in that they flow between containers and change shape to fill the container, and partly like (what physicists mean when they talk about) crystals, in that the molecules arrange themselves in regular repeating patterns. Liquid crystals have a whole bunch of fascinating properties - in fact, the last post on this blog was about their optical properties and what they could mean for LiDARs.


There are multiple earlier studies showing how you can combine a network of polymer molecules, which gives rigidity and structure to the final material, with liquid crystals (for instance there is this review article, although it's fairly technical). This can for example give you materials that bend in a specific way when heated, like the walking plastic does. The reason is that when the otherwise well-ordered liquid crystal molecules are heated, they move from their well-ordered positions to something more disordered. If they start out lying one after another in a long line, increased disorder means the line starts to bend and therefore covers a smaller distance. This part of the material will then contract. On the other hand, if the liquid crystal molecules are lying parallel to each other, increased disorder may require them to move apart, so this part of the material will expand. In the walking piece of plastic one side has the liquid crystal molecules arranged in lines, while the other has them more or less parallel, so when the material is heated one side will contract and the other will expand. This leads to the material bending significantly when heated.



We thus start out with a plastic material that bends in a very specific way when it is heated. But how has it been made to react to light as well? The paper reveals that one side of the material was coated with a dye that absorbs light in a certain range of wavelengths. The absorbed light causes the dye to heat up and transfer heat to its surroundings, in this case the plastic. However, when the dye is just in a layer on the surface of the plastic, it does not cause a temperature increase large enough to get the plastic to bend. Instead, the material was exposed to both light and heat at the same time. The heat led both to the plastic bending and to the dye diffusing into the plastic (like the red pigment in tomato sauce spreading into the plastic of your leftover containers - if you use plastic containers, that is). The more the dye diffused and spread into the plastic, the more efficiently it contributed to heating the plastic, further increasing the temperature. Once the dye was spread more or less evenly throughout the plastic, heating via the dye was efficient enough that, after the material was cooled down and straightened out, shining a light on it caused a temperature increase large enough for it to bend again.

So does this mean that the piece of plastic has been taught something? The scientists themselves compare the process to conditioning and draw parallels to Pavlov's experiments on dogs. Pavlov's experiments showed that the dogs, who normally started to drool when given food, could also be made to drool at the sound of a bell, provided that they were first conditioned by hearing the bell while they were being fed. The scientists compare the heat that causes the plastic to bend to the food, the light to the bell and the simultaneous exposure to light and heat to the conditioning phase.

They also admit that there are problems with this comparison and that the plastic is (obviously) a much simpler system. An important difference is that Pavlov's dogs reacted even without any food being present, while the plastic does not actually bend in the absence of heat - the heat is just provided in a different way. Another difference is that the piece of plastic cannot spontaneously "forget" its "conditioning" the way a living organism would - the dye will stay where it is. One could also have skipped the "conditioning" altogether and just mixed in the dye from the start, or exposed the material to heat only (rather than heat and light) to get the dye into the plastic, and achieved the same end result. All things considered, it is doubtful whether this should really be seen as a form of conditioning, and thereby learning. Maybe this research will give rise to a simple material model for some kinds of learning, but it is clearly very far from the kinds of learning seen in organisms.

Tuesday, July 16, 2019

Why put liquid crystals in LiDARs?

When trying to develop self-driving cars, companies have equipped their test and demo vehicles with a large number of sensors of various kinds. One popular type of sensor is the LiDAR, a device that sends out narrow pulses of laser light and measures the time it takes for the light to be reflected by something in the surroundings and come back. From information about the elapsed time and the direction of the laser it is easy to determine the location of the point where the pulse was reflected. If you then sweep the beam over an area you will get a series of measurements giving the nearest distance to an object at each angle. Of course this only covers two dimensions, so several lasers are mounted on top of each other in order to get a three-dimensional map of the surroundings. A LiDAR can produce a relatively high resolution mapping even at large distances, say a few hundred meters, so it is not surprising that they are popular among people who are trying to supply self-driving algorithms with sufficient amounts of high-quality data.
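The distance calculation itself is simple; a minimal sketch of the time-of-flight geometry (the function name is mine, not from any real LiDAR software) might look like this:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def reflection_point(elapsed_s, azimuth_rad):
    """Convert a round-trip time and beam direction into an (x, y) point."""
    # The pulse travels out and back, so the one-way distance is half
    # of the total distance covered during the elapsed time.
    distance = C * elapsed_s / 2.0
    return (distance * math.cos(azimuth_rad), distance * math.sin(azimuth_rad))

# A pulse that returns after about a microsecond was reflected ~150 m away:
x, y = reflection_point(1e-6, 0.0)  # beam pointing straight ahead
```

Sweeping the azimuth while repeating this measurement produces the two-dimensional slice described above; stacking several lasers at different elevations turns it into a 3D point cloud.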

On the other hand, there are also companies, notably Elon Musk's Tesla, that prefer not to use LiDAR. The technology has a number of drawbacks, such as every single LiDAR being very expensive and prone to breaking due to its many moving parts. The LiDARs from Velodyne simply have narrow-beam lasers that rotate 360 degrees; other companies use mirrors and microelectromechanical systems to scan laser beams over a smaller field of view, but they all seem to need moving parts, which tend to wear out quickly.

In March this year, however, the American company Lumotive claimed to have come up with a way to dispense with the moving parts completely. According to IEEE Spectrum, they accomplish this by using a liquid crystal metamaterial to slow down selected parts of the laser beam, thereby shifting their phase relative to the other parts. This means that the peaks and troughs of the electromagnetic wave in different parts of the beam will occur in different places, reinforcing each other in some directions while cancelling each other out in others. By controlling how much each part of the beam is slowed down, it is possible to control the direction of the beam (incidentally, this is what I tried to describe in my post about radar and the spinning thing, except now it's with infrared light instead of microwaves).
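As a rough illustration of the steering principle (the wavelength and element spacing below are assumptions for the sake of the example, not Lumotive's actual numbers): for the wavefronts to line up in a chosen direction, the phase delay applied must grow linearly from element to element across the beam.

```python
import numpy as np

WAVELENGTH = 905e-9       # a typical near-infrared LiDAR wavelength, in meters
SPACING = WAVELENGTH / 2  # assumed distance between adjacent phase-shifting elements

def steering_phases(n_elements, steer_angle_rad):
    """Phase delay each element must apply so that the delayed parts of the
    beam reinforce each other (interfere constructively) in the chosen
    direction, wrapped into the range [0, 2*pi)."""
    k = 2 * np.pi / WAVELENGTH  # wavenumber of the light
    return (k * SPACING * np.sin(steer_angle_rad) * np.arange(n_elements)) % (2 * np.pi)

phases = steering_phases(8, np.radians(10))  # steer 10 degrees off axis
```

Changing the steering angle only means changing this phase pattern electronically, which is why no part of the device has to move.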

If this turns out to work well it would be extremely useful, but what exactly is a liquid crystal metamaterial? We know that a metamaterial is an artificial material that is constructed out of small bits of other materials, usually in a way that gives it very exotic properties. So apparently, we are dealing with an artificial material made with liquid crystals.

Liquid crystals are substances, usually consisting of very long molecules, that in some ways behave like liquids and in other ways like crystals. (This is "crystal" in the scientific sense, meaning that atoms are arranged in a regular three-dimensional grid.) For example, liquid crystals will often flow and change shape like liquids but the molecules will be arranged in a regular, crystal-like structure. Both Lumotive's LiDAR and the more well known application of liquid crystals, namely liquid crystal displays (LCD), make use of the ease with which the molecular orientation in the liquid crystal can be changed (because it is a liquid) in combination with the special optical properties that arise from the crystalline structure.

Since liquid crystals consist of long molecules they tend to be very anisotropic, meaning that the properties of the material differ depending on whether you look at it along the length axis of the molecules, perpendicular to it, or from some other angle. When it comes to optical properties this means that the speed of light propagation in the liquid crystal, and therefore the refractive index, is different depending on whether the light is propagating along the molecules or across them. The effect also depends on the relation between the molecule orientation and the polarization of the light, in a way that makes liquid crystals able to change the polarization of light that passes through them.
This ability to change the polarization of light is what is used in LCDs. The effect on polarization depends on the orientation of the long molecules of the liquid crystal, which can be changed by applying an electric field. By sandwiching a liquid crystal between polarization filters and manipulating the molecular orientation it is possible to turn transmission of light through the structure on, by ensuring that the polarization of the light is aligned with both polarization filters, or off, by ensuring that the polarization of the light is perpendicular to one of the filters. This is the basic principle behind the pixels of an LCD.
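The on/off logic of such a pixel can be sketched with Malus' law, treating the liquid crystal layer simply as something that rotates the polarization by a controllable angle (a deliberate simplification of what the layer actually does):

```python
import math

def transmitted_fraction(rotation_rad, analyzer_rad=math.pi / 2):
    """Fraction of light passing the second polarizer (the 'analyzer') after
    the liquid crystal rotates the polarization by rotation_rad. The analyzer
    is assumed to be crossed (at 90 degrees) relative to the first polarizer.
    Malus' law: transmitted intensity goes as cos^2 of the angle between the
    light's polarization and the analyzer's axis."""
    return math.cos(rotation_rad - analyzer_rad) ** 2

on = transmitted_fraction(math.pi / 2)  # 90-degree rotation: pixel bright
off = transmitted_fraction(0.0)         # no rotation: pixel dark
```

Intermediate rotation angles give intermediate brightness levels, which is how gray scales are produced.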

The metamaterials in the Lumotive LiDARs, on the other hand, appear to make use of the anisotropy of the refractive index of liquid crystals*. Just like in LCDs the orientation of the molecules of the liquid crystal is controlled through application of an electric field, but in order to tune the refractive index to a specific value instead of achieving a shift in polarization. This can be seen as choosing if the light will propagate along the molecules, perpendicular to the molecules or at some angle in between, and making use of the difference in propagation speed for the different cases.

The other parts of the metamaterial appear to be dielectric resonators consisting of silicon elements with the liquid crystal material sandwiched in between. When the refractive index of the liquid crystal is changed, this changes the properties of the dielectric resonator as a whole. If a laser beam is reflected from a surface full of these dielectric resonators, the phase of each part of the reflected beam will depend on the refractive index of the liquid crystal in the resonators. By changing the refractive index according to some pattern, the direction of the reflected beam can be controlled.

If this idea turns out to work well in practice it could potentially make LiDARs much more affordable, and therefore probably much more common in vehicles. How much further along the path towards autonomous driving it can take us of course remains to be seen.




* I must admit that I am guessing a bit here, since there are many recent patent applications for similar ideas and I have not been able to find exactly which ones belong or are licensed to Lumotive. The one that seems most likely to form the basis of Lumotive's work is WO2018156688A1, which has the Lumotive CTO listed as an inventor.

Sunday, June 2, 2019

More on machine learning and materials

In the last post, I looked at a paper in which machine learning had been used to predict properties of doped graphene. One of my thoughts on this was that the study seemed unsatisfying because it only concluded that the neural network could be trained to make the prediction, but there was no attempt to figure out how it made the prediction - even though that might have told us something interesting both about the network and about doped graphene.

Oddly enough, the paper contained references to an interesting study where researchers had done exactly that, albeit for a very different problem. This paper by Ziletti et al., published in Nature Communications in 2018, considers the problem of finding a method to classify crystal structures that is robust and not dependent on a myriad of hand-tuned thresholds and parameters. Along the way, they adapt a method for probing the internal workings of a neural network to their own application.

Admittedly, crystal structure classification doesn't sound like the most exciting problem in the world, but within materials science and condensed matter physics it is very important. A lot of materials are crystalline, i.e. they consist of periodically repeating arrangements of atoms. Knowing what these periodically repeating arrangements look like and in what ways they are symmetric is important for understanding, investigating and modelling the material - and often for figuring out what it can be useful for or how it can be improved. It is also a rather tedious process with a lot of potential for error due to noisy measurements and the fact that real-world materials are not perfect crystals, but will always contain defects of various kinds. In the study, the aim is to develop a robust classification method that can handle the presence of defects without misclassifying the structures.

The first step is to decide what sort of input data to use. This is more complex than it seems, since just using atomic positions might make the classifier inherently sensitive to defects. Instead, the researchers have chosen to use simulated diffraction patterns, which condense the information about atom placements and inter-atomic distances into a number of bright spots. (If you recall being shown diffraction in some high-school physics class, this is the same thing, only with periodic atomic structures instead of slits and using electromagnetic radiation with a much shorter wavelength.) The diffraction patterns are fed into a convolutional neural network with multiple layers, which extracts features from the patterns and then classifies the patterns based on these features. Tests of the networks showed good performance, even when the data was noisy or the structures contained a high number of defects.

Now for the interesting part. As described in the previous post, feature extraction in a convolutional neural network can be likened to a process where small sections of an image are compared to a smaller image, and a positive response is given if they match. The output of the first comparison is then used in another comparison that extracts more complicated features, and so on. Training the neural network amounts to adjusting the smaller images, or filters, to respond to features of the image that enable the network to make the correct classification. If picking out straight lines enables correct classification, at least some of the filters will end up responding to straight lines. If curves are important, some of the filters will respond to curves.

This also means that when the neural network has been trained and an image is fed into it, at some deep level in the neural network there will be a vector representing the features that are present in the image and that the neural network has been trained to extract and classify. This vector could tell us exactly what information the network is using when classifying a particular image, but due to the complexity of the preceding layers of the network it is hard to interpret. It is, however, possible to start from this representation of the extracted features and essentially go through all the layers of the network in reverse, finally arriving at a generated picture that shows just the features picked out by the network in a way that can easily be recognized by humans (these pictures are also known as attentive response maps). Using this method, the researchers found that the neural network had in fact learned to use many of the characteristics that humans use when classifying crystal structures, such as distances between atomic planes.

So why is this interesting? For one thing, it demonstrates a method of checking if the classification performed by the network is based on something we would consider significant, or if it has learned to classify based on something obviously irrelevant - say, some kind of noise that is more common in some types of images than others. It also suggests that we could use neural networks not just to make predictions or classify data points, but also to understand the differences between the data points better. It is after all entirely possible that the networks could extract some feature that we do not realize the importance of yet. Personally, I think this is the way to use machine learning in physics - not just looking for the how, but also the why.

Finally, I should mention that the method in itself is adapted from a 2018 paper on classifying X-ray images of body parts, which in turn references a much earlier paper on understanding how convolutional networks classify more ordinary images. It is perhaps telling that it was picked up in the medical field, since knowing that neural networks classify based on the right information could be vital there.

Wednesday, May 29, 2019

Machine learning and materials science

This post is a translation of a post that appeared on my Swedish blog in May 2019.

The other week I read an optimistic blog post on the subject of machine learning by the American skeptic and neurologist Steven Novella. He wrote, among other things, about an American research group that has trained a neural network to determine properties of doped graphene - that is, graphene where some of the carbon atoms are replaced with other elements - from the placement of the dopant atoms. Novella chose to portray this as the neural network being able to perform decades of research in the course of a few days, and hinted that this could give us practical applications of graphene considerably earlier than if no machine learning had been used.

As someone who is interested in both graphene and machine learning, I obviously had to find the scientific paper the group had published and try to figure out what they had actually done.

The research question
The paper in question is published in npj Computational Materials (it is also open access, by the way) and according to the title it deals with the prediction of the so-called band gap of materials that are a combination of graphene and boron nitride. Boron nitride is a material that consists of two types of atoms, boron and nitrogen, arranged in a hexagonal lattice just like the carbon atoms in graphene. Also just like graphene, boron nitride can be produced as just a single, super-thin layer of atoms. These similarities between the two materials are a part of the reason why people try to combine them.

Another part of the reason is that while graphene has excellent electrical conductivity, it is very difficult to get boron nitride to conduct electricity at all. This difference arises because it requires fairly little energy to get the electrons in graphene moving, while the electrons in boron nitride need a lot of extra energy to get to a state where they are mobile. This energy boost that the electrons need in order to move is a measure of the band gap (corresponding to a gap in energy between different states that the electrons can be in). Graphene thus has an extremely small band gap, while boron nitride has a large one. By combining the two materials, people want to create a hybrid material with a band gap of a size that is useful for, e.g., applications in electronics.

However, it turns out that you cannot just replace a few carbon atoms with boron and nitrogen. How the boron and nitrogen atoms are arranged in relation to each other matters for how large the band gap of the resulting material turns out to be. What the American research group has done is to try to predict the size of the band gap from the placement of the boron and nitrogen atoms using artificial neural networks, more specifically so-called CNNs or convolutional neural networks.

The neural networks

CNNs are a type of neural network developed to pick out characteristic features from images and then classify the images based on those features - they are useful, for example, for facial recognition and when self-driving cars need to tell the difference between a pedestrian and a road sign. The basic principle of a CNN is similar to comparing small regions of a picture with smaller, simpler images and giving a positive response where they are similar. If, for example, you have a picture of a house and the smaller image contains a vertical line, you might get a positive response at the corners, doors or windows, since their depictions contain straight, vertical sections. In a CNN, however, both images are represented as matrices of numbers, and there are also several layers, where the result of one comparison with a smaller image is in turn compared with more matrices (this is needed to identify more complex features in the image). Also note that the smaller image (or filter) is not something you define beforehand, but something the network learns. If your training data contains no straight lines, the filters that result from training probably will not contain any either.
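To make this concrete, here is a small, self-contained sketch (not taken from the paper) of the comparison step: a hand-made "vertical line" filter slid across a tiny binary image. In a trained CNN the filter values would be learned rather than chosen by hand.

```python
import numpy as np

# A 5x5 "image": a filled square whose left and right sides are vertical edges.
image = np.array([
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
])

# A 3x3 filter that responds to vertical edges (dark on the left, bright on
# the right). In a real CNN this filter would be learned from training data.
vertical_edge = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
])

def convolve2d(img, kernel):
    """Slide the kernel over the image and record the response at each position."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

response = convolve2d(image, vertical_edge)
print(response)  # strongly positive at the left edge, negative at the right
```

The map of responses is what one layer of the network passes on to the next; stacking such layers is what lets a CNN build up more complex features from simple ones.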



To be able to use CNNs for the graphene problem described above, the researchers chose to use computer models where each pair of atoms is represented by a number. When boron and nitrogen atoms are introduced into graphene they usually come in pairs, with a boron and a nitrogen atom next to each other. The researchers therefore chose to represent a boron-nitrogen pair with a "one" and a carbon-carbon pair with a "zero", and thereby constructed an image of the material that different types of CNNs can handle. They also built their networks to give the size of the band gap as output.
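The paper's exact lattice encoding is more involved than I can reproduce here, but the basic idea - turning a configuration of atom pairs into a small "image" of zeros and ones that an image-oriented network can accept - can be sketched like this (the grid size and the shape convention are my own assumptions, not the paper's):

```python
import numpy as np

# Hypothetical 8x8 grid of atom pairs: 1 = boron-nitrogen pair, 0 = carbon-
# carbon pair. Here the pattern is random; in the study each configuration
# would correspond to an actual candidate material.
rng = np.random.default_rng(0)
configuration = (rng.random((8, 8)) < 0.25).astype(np.float32)

# Reshape it the way image-processing CNN libraries usually expect their
# input: (batch, height, width, channels).
cnn_input = configuration[np.newaxis, :, :, np.newaxis]
print(cnn_input.shape)  # (1, 8, 8, 1)
```

Once the material is in this form, the whole toolbox built for image classification can be pointed at it, with the band gap as the quantity to predict instead of a class label.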

Neural networks need to be trained on relevant data in order to work, something that usually involves automatically comparing the output of the network to the desired result, calculating the deviation, and adjusting the network to give a better answer. In order to train their neural networks the researchers therefore generated several thousand possible configurations and calculated the band gap of each configuration using density functional theory. The trained networks were then used to predict the band gap for another batch of configurations where the calculated band gap was known but which had not been used in training. The results turned out to be very promising.
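The training loop itself follows the usual pattern: predict, measure the deviation, adjust, repeat. As a minimal stand-in (a single-layer model trained by gradient descent on synthetic data - nothing like the paper's actual CNNs or its DFT-calculated band gaps), the pattern looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the paper's data: each "configuration" is a flattened
# 0/1 grid, and the "band gap" is a made-up linear function of it plus noise.
n_samples, n_features = 200, 16
X = (rng.random((n_samples, n_features)) < 0.25).astype(float)
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.01 * rng.normal(size=n_samples)

# Train: predict, measure the deviation (mean squared error), and nudge the
# weights in the direction that reduces it.
w = np.zeros(n_features)
lr = 0.05
for epoch in range(1000):
    error = X @ w - y
    grad = 2 * X.T @ error / n_samples
    w -= lr * grad
train_mse = np.mean((X @ w - y) ** 2)

# Evaluate on held-out configurations the model never saw during training,
# just as the researchers did with their second batch.
X_test = (rng.random((50, n_features)) < 0.25).astype(float)
y_test = X_test @ true_w
test_mse = np.mean((X_test @ w - y_test) ** 2)
print(f"train MSE: {train_mse:.5f}, test MSE: {test_mse:.5f}")
```

The held-out evaluation at the end is the important part: a model that only performs well on the data it was trained on has merely memorized, not learned.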

What can we learn from this?

So what is the effect of this study? The researchers have successfully shown that it is possible to predict certain properties of materials using neural networks, which should give those who do research on graphene and other two-dimensional materials another tool that they can use in their research. There is still a long way to go from this particular study to electronics based on graphene and boron nitride, but it may make it easier to know what kind of material configurations are worth working on.

Another interesting thing about this study is what it says between the lines about the limits of machine learning. For the method to work at all, the neural network needs to receive all the relevant information in a format it can process, which means that it takes quite a bit of knowledge about graphene and boron nitride to even formulate the problem in a way that can be tackled. For example, in this study the researchers chose to focus entirely on where each boron-nitrogen pair sits in relation to other pairs, and thus discarded all other characteristics of the material, presumably based on what is already known about these materials. (As an example, the relative orientation of neighbouring boron-nitrogen pairs is completely ignored - is it boron-nitrogen-nitrogen-boron or boron-nitrogen-boron-nitrogen? That information has been trimmed away before the neural network is even involved.)

A known limitation of neural networks is that it is hard to understand why they work the way they do, even when they give good results. In a study like this one it would have been very interesting to see what the structures with low or high band gaps, respectively, have in common, but that is not information that is easy to extract from the neural network itself, and the researchers do not seem to have tried. I strongly suspect that a method for understanding what goes on inside the networks is necessary for this type of study to help us understand the studied materials.

As you have probably understood by now, I do not quite agree with Steven Novella about this one fairly limited study showing that neural networks will do decades of research in a few days and take us significantly closer to graphene electronics, but the results in it are still interesting as an example of machine learning in materials physics.

Why do we not have invisibility cloaks?

This post is a translation of a post that was published on my Swedish-language blog in December 2018.

In 2018, it was ten years since I stumbled upon the opportunity to write my Bachelor's thesis on the subject of photonic crystals. Photonic crystals are a metamaterial, an artificial material created by putting together small pieces of ordinary materials to get something with unusual properties. A photonic crystal can for example efficiently block electromagnetic radiation (light, microwaves etc.) in a specific frequency band, while frequencies outside the band get through with relatively little loss. Materials with this property could have many interesting applications, but the application that caught the attention of popular science publications at the time was that metamaterials could maybe, maybe be used to make things invisible. The obvious question is then, have we gotten any closer to having invisibility cloaks over the last 10 years?


Ten years ago it was photonic crystals specifically that people were talking about. A photonic crystal consists of at least two materials with different optical properties - usually materials that are not electrically conducting and that have different refractive indices. It could for example be two different types of plastic, or plastic and glass. The two materials are arranged in a periodic structure, which could for example be a stack of thin slabs alternating between plastic and glass (Wikipedia has some excellent figures that show what this can look like). The thickness of the slabs should correspond to roughly half the wavelength of the electromagnetic radiation that you want to stop from propagating in the material.
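As a quick sanity check of the scale involved (with refractive indices I have assumed as typical values, since none are given above): inside a material the wavelength is the vacuum wavelength divided by the refractive index, so a half-wavelength slab for green light comes out on the order of 200 nm.

```python
# Rough back-of-envelope layer thickness for a photonic crystal targeting
# green light. The half-wavelength rule uses the wavelength inside the
# material, i.e. the vacuum wavelength divided by the refractive index.
wavelength_vacuum_nm = 530  # green light

thicknesses_nm = {}
for name, n in [("glass", 1.5), ("plastic", 1.49)]:  # assumed typical indices
    in_material_nm = wavelength_vacuum_nm / n
    thicknesses_nm[name] = in_material_nm / 2
    print(f"{name} (n = {n}): ~{thicknesses_nm[name]:.0f} nm per layer")
```

Manufacturing many such layers with sub-wavelength precision is part of what makes visible-light photonic crystals hard to scale up.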

Just stopping radiation of a specific frequency from propagating is, however, not enough to make something invisible. Preferably, we would like to lead the light around the object that we want to render invisible, so that the eyes of anyone looking at it receive only light from whatever is behind the object. Theoretically this is possible with photonic crystals, thanks to something called effective negative refractive index.

When light passes from one material to another, for example from air into glass, it does not continue in a straight line but changes direction by a specific angle. The angle depends on the difference in the speed of light between the two materials, and this difference is expressed in terms of the refractive index. In normal materials the refractive index is positive, but in metamaterials, radiation at some frequencies can change direction by a much larger angle than is ever possible in a normal material (see figure). This is expressed as the metamaterial having an effective negative refractive index. By tuning the metamaterial to a specific value of the refractive index it is possible to use this to control the path of light, for example to lead it around an object you want to conceal.
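The direction change is described by Snell's law, n1*sin(theta1) = n2*sin(theta2). A small sketch shows both the ordinary air-to-glass case and what happens when the effective refractive index is negative - the refracted ray ends up on the same side of the surface normal as the incoming ray:

```python
import math

def refraction_angle(theta_in_deg, n1, n2):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2).

    A negative n2 gives a negative angle, i.e. the refracted ray bends to
    the same side of the normal as the incident ray came from.
    """
    s = n1 * math.sin(math.radians(theta_in_deg)) / n2
    return math.degrees(math.asin(s))

glass_deg = refraction_angle(30, 1.0, 1.5)   # air -> glass: about 19.5 degrees
neg_deg = refraction_angle(30, 1.0, -1.0)    # air -> n = -1 metamaterial: -30 degrees
print(glass_deg, neg_deg)
```

An effective index of exactly -1 is the textbook case (a flat slab of such a material acts as a lens), but in a real metamaterial the achievable value depends strongly on frequency.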


However, there are two problems with this. Firstly, the frequency range where photonic crystals have a negative refractive index is usually very narrow, so a given metamaterial will only work for a small band of frequencies (you might, say, be invisible in green light but not in red or blue). Secondly, even if every layer in the photonic crystal is very thin, you need a lot of layers to get a good effect. This means that even for visible light, with wavelengths below one micron, an invisibility cloak based on photonic crystals would be quite unwieldy.

Since my first contact with this field, another type of metamaterial has become more popular. Instead of mostly using non-conducting materials, these metamaterials are built up from metallic elements or even small electric circuits. This category of metamaterials mostly builds on various resonance phenomena that can occur in metallic structures exposed to electromagnetic radiation. A common example is the so-called split-ring resonator, which consists of two rings of metallic material, one smaller and placed inside the other. Both rings have an opening, and they are placed so that the openings sit on opposite sides. When this structure is exposed to electromagnetic radiation, an electric current is induced in the rings (inductance) and electric charge builds up around the openings (capacitance). These currents and charges in turn affect the surrounding electric and magnetic fields, i.e. the radiation.
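Since the rings behave roughly like an LC circuit, the frequency they respond to is set by the inductance and capacitance: f0 = 1/(2*pi*sqrt(L*C)). With order-of-magnitude values I have assumed for a millimetre-scale split ring (not taken from any specific paper), the resonance lands squarely in the microwave range:

```python
import math

def lc_resonance_hz(L_henry, C_farad):
    """Resonance frequency of an LC circuit: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L_henry * C_farad))

# Assumed order-of-magnitude values for a millimetre-scale split ring:
# roughly a nanohenry of inductance from the ring, roughly a picofarad
# of capacitance across the gaps.
f0 = lc_resonance_hz(1e-9, 1e-12)
print(f"{f0 / 1e9:.1f} GHz")  # ~5 GHz, i.e. microwave / radar frequencies
```

Shrinking the ring shrinks both L and C, pushing the resonance up in frequency - which is why reaching visible light requires elements only tens of nanometres across.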

A difference between traditional photonic crystals and metamaterials with metallic components is that with the metallic components, the individual elements of the metamaterial, like the split rings, should be much smaller than the wavelength of the radiation that is to be stopped or controlled. This is a good thing if you want to make an "invisibility cloak" for lower frequencies (i.e. where the wavelengths are larger), but for visible light you run into the problem that it is still fairly difficult to make large quantities of components that are just a few tens of nanometres in size, especially since they need to be manufactured with high precision. In addition, metallic metamaterials also only work in a limited frequency band that depends on the size of the elements. The band is somewhat wider than for photonic crystals, but from what I have been able to find it would still be difficult to cover, for example, the entire visible range. Thus, no invisibility cloak à la Harry Potter yet.

As a final note, I should probably mention that even if an invisibility cloak for the visible spectrum would be cool, that is not really what drives research in this area. Most papers I have found deal with electromagnetic radiation with wavelengths from a few millimetres up to several centimetres, which is used for e.g. radar and communication. In this frequency range metallic metamaterials work fairly well and have yielded applications for those who, for example, want to make airplanes harder to detect with radar.
