By combining purpose-built materials and neural networks, researchers at EPFL have shown that sound waves can be used to produce high-resolution images.
Imaging allows us to depict an object through far-field analysis of the light and sound waves it transmits or radiates. The shorter the wave, the higher the resolution of the image. Until now, however, the level of detail has been limited by the size of the wavelength in question. Researchers at EPFL's Laboratory of Wave Engineering have successfully demonstrated that a long, and therefore imprecise, wave (in this case a sound wave) can reveal details 30 times smaller than its own length. To achieve this, the team used a combination of metamaterials – specially engineered elements – and artificial intelligence. Their research, just published in Physical Review X, opens up exciting new possibilities, particularly in the fields of medical imaging and biotechnology.
The group's groundbreaking idea was to combine two separate technologies that had each previously pushed the boundaries of imaging. One is metamaterials: purpose-built elements that can, for example, focus wavelengths with great precision. However, they are known to absorb signals in a cluttered way that makes them difficult to decode. The other is artificial intelligence, or more specifically, neural networks that can quickly and efficiently process even the most complex information, albeit after a learning phase.
To exceed the diffraction limit known in physics, the team – led by Romain Fleury – performed the following experiment. First, they created a lattice of 64 miniature speakers, each of which could be activated according to the pixels of an image. They then used the lattice to reproduce acoustic images of the digits 0 through 9 with extremely precise spatial detail; the digit images fed into the lattice were drawn from a database of about 70,000 handwritten examples. Facing the lattice, the researchers placed a bag containing 39 Helmholtz resonators (10-cm spheres, each with a hole at one end), which together form a metamaterial. The sound produced by the lattice was transmitted through the metamaterial and captured by four microphones located several meters away. Algorithms then decoded the sound recorded by the microphones, learning to recognize and redraw the original digit images.
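The decoding step described above can be viewed as a learned inverse problem: the metamaterial and the room mix the image's pixel values into a handful of recorded signals, and a model is trained on image–recording pairs to invert that mixing. The sketch below is a minimal illustration of this idea, not the paper's actual setup: it assumes a random linear forward model and uses a least-squares decoder in place of the researchers' neural network, with made-up sizes (64 "pixels" matching the 64-speaker lattice, 256 simulated measurement channels).

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64     # e.g. an 8x8 image; the paper's lattice has 64 speakers
n_channels = 256  # simulated microphone/frequency channels (assumed)

# Assumed forward model: the metamaterial scrambles pixel amplitudes
# linearly into the recorded signals (a stand-in for the real physics).
A = rng.normal(size=(n_channels, n_pixels))

def record(images):
    """Simulate noisy recordings for a batch of flattened images."""
    noise = 0.01 * rng.normal(size=(images.shape[0], n_channels))
    return images @ A.T + noise

# Training pairs: random "images" and their simulated recordings.
train_imgs = rng.random((1000, n_pixels))
train_sigs = record(train_imgs)

# Fit a linear decoder W mapping recordings back to images
# (least squares -- a minimal stand-in for the paper's neural network).
W, *_ = np.linalg.lstsq(train_sigs, train_imgs, rcond=None)

# Reconstruct unseen images from their recordings alone.
test_imgs = rng.random((100, n_pixels))
recon = record(test_imgs) @ W
err = np.mean((recon - test_imgs) ** 2)
print(f"mean squared reconstruction error: {err:.5f}")
```

Because the simulated channels outnumber the pixels, the decoder recovers the images to within the noise level; in the real experiment the mixing is far more complex, which is why a deep network is used instead of a linear fit.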
An advantageous downside
The team achieved a success rate of almost 90% with their experiment. "By creating images with a resolution of just a few centimeters – using sound waves approximately one meter in length – we went well beyond the diffraction limit," said Romain Fleury. "Moreover, the tendency of metamaterials to absorb signals, which had been seen as a major drawback, turns out to be an advantage when neural networks are involved. We found that they work better when there is more absorption."
In the field of medical imaging, using long waves to view very small objects could be a major breakthrough. "Long waves mean doctors can use much lower frequencies, resulting in acoustic imaging methods that are effective even through dense bone tissue. When it comes to imaging with electromagnetic waves, long waves are also less dangerous to a patient's health. For these applications, we wouldn't train neural networks to recognize or reproduce digits, but rather organic structures," says Fleury.
The new metamaterial controls sound to improve acoustic imaging.
Bakhtiyar Orazbayev et al., Far-field subwavelength acoustic imaging by deep learning, Physical Review X (2020). DOI: 10.1103/PhysRevX.10.031029
Provided by Ecole Polytechnique Federale de Lausanne
Citation: Deep learning and metamaterials make the invisible visible (2020, August 10), retrieved August 10, 2020 from https://phys.org/news/2020-08-deep-metamaterials-invisible-visible.html
This material is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.