In today’s digital age, computational tasks are becoming increasingly complex, which in turn has driven exponential growth in the power consumed by digital computers. It is therefore necessary to develop hardware that can perform large-scale computing in a fast and energy-efficient manner.
In this regard, optical computers, which use light instead of electricity to perform calculations, are promising. They can offer lower latency and power consumption while taking advantage of the parallelism inherent to optical systems. As a result, researchers have explored various designs for optical computing.
One example is the diffractive optical network, which is built through a combination of optics and deep learning to perform computationally complex tasks such as image classification and reconstruction. It consists of a stack of structured diffractive layers, each containing thousands of diffractive features/neurons. These passive layers are used to control light–matter interactions so as to modulate the input light and produce the desired output. Researchers train the diffractive network by optimizing the profiles of these layers using deep learning tools. Once the resulting design is fabricated, this framework acts as a standalone optical processing unit that requires only an input light source to operate.
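As a loose numerical illustration (not the authors' actual physical model), a diffractive network can be viewed as alternating steps of free-space propagation and per-pixel phase modulation by passive layers. The sketch below uses a unitary DFT matrix as a stand-in for optical propagation and random phase masks in place of the trained layer profiles; all names and sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PIX = 64      # pixels per layer (illustrative size)
N_LAYERS = 3    # number of diffractive layers

# Stand-in for free-space propagation between layers: a fixed unitary
# operator (a normalized DFT matrix here; real systems use angular-spectrum
# propagation instead).
F = np.fft.fft(np.eye(N_PIX)) / np.sqrt(N_PIX)

# Each passive layer applies a per-pixel phase delay; these phase profiles
# are what deep-learning optimization would tune during training.
phase_masks = [np.exp(1j * rng.uniform(0, 2 * np.pi, N_PIX))
               for _ in range(N_LAYERS)]

def forward(field):
    """Propagate an input complex field through the layer stack."""
    for mask in phase_masks:
        field = F @ field        # propagate to the next layer
        field = mask * field     # phase-only modulation by the passive layer
    return F @ field             # propagate to the output plane

x = rng.standard_normal(N_PIX) + 1j * rng.standard_normal(N_PIX)
intensity = np.abs(forward(x)) ** 2   # detectors measure output intensity
```

Because every step here is linear in the input field, the whole stack collapses to a single complex-valued matrix multiplication, which is exactly the kind of linear transformation such networks are trained to realize.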
To date, researchers have succeeded in designing monochromatic (single-wavelength illumination) diffractive networks that implement a single linear transformation (matrix multiplication). But is it possible to perform several linear transformations simultaneously? The same UCLA research group that first introduced diffractive optical networks has recently addressed this question. In a new study published in Advanced Photonics, they employed a wavelength multiplexing scheme in a diffractive optical network and demonstrated the feasibility of using a broadband diffractive processor to perform massively parallel linear transformations.
UCLA Professor Aydogan Ozcan, who leads the research group at the Samueli School of Engineering, briefly describes the architecture and principles of this optical processor: “A broadband diffractive optical processor has input and output fields-of-view with Ni and No pixels, respectively. They are interconnected by successive structured diffractive layers, made of passive transmissive materials. A predetermined set of Nw discrete wavelengths encodes the input and output information. Each wavelength is dedicated to a unique target function or a complex-valued linear transformation,” he explains.
“These target transformations can be specifically assigned to distinct functions such as image classification and segmentation, or they can be dedicated to computing different convolutional filter operations or fully connected layers in a neural network. All of these linear transformations or desired functions are executed simultaneously at the speed of light, with each desired function assigned to a unique wavelength. This allows the broadband optical processor to compute with extreme throughput and parallelism.”
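The wavelength multiplexing idea can be sketched numerically: a passive layer of fixed thickness imparts a wavelength-dependent phase delay (roughly 2π(n−1)h/λ for a thin layer of refractive index n and thickness h), so one and the same physical hardware presents a different end-to-end matrix to each illumination wavelength. The refractive index, thickness range, and sizes below are made-up values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
N_PIX = 16
# Stand-in for free-space propagation: a normalized (unitary) DFT matrix.
F = np.fft.fft(np.eye(N_PIX)) / np.sqrt(N_PIX)

n_index = 1.7                            # assumed refractive index of the layer
thickness = rng.uniform(0, 2e-6, N_PIX)  # one fixed thickness map (meters)

def end_to_end_matrix(wavelength):
    """Effective linear transformation of a single passive layer at one wavelength."""
    # Thin-layer phase delay: 2*pi*(n - 1)*h / lambda, per pixel.
    phase = 2 * np.pi * (n_index - 1) * thickness / wavelength
    # Propagate, modulate by the layer, propagate to the output plane.
    return F @ (np.exp(1j * phase)[:, None] * F)

A_red  = end_to_end_matrix(700e-9)   # transformation seen by 700 nm light
A_blue = end_to_end_matrix(450e-9)   # transformation seen by 450 nm light
```

Here `A_red` and `A_blue` are clearly different matrices even though the simulated hardware is identical, which is the mechanism that lets each wavelength carry its own target transformation.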
The researchers show that such a wavelength-multiplexed optical processor design can approximate Nw unique linear transformations with negligible error when the total number of diffractive features N is greater than or equal to 2NwNiNo. This conclusion was confirmed for Nw > 180 distinct transformations through numerical simulations and holds for materials with different dispersion properties. Moreover, using a larger N (3NwNiNo) increased Nw further, to about 2,000 unique transformations that are all performed optically in parallel.
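To get a feel for this scaling, the reported criterion N ≥ 2NwNiNo can be evaluated directly. The helper function and the field-of-view sizes below are invented purely for illustration.

```python
def min_diffractive_features(n_w, n_i, n_o, factor=2):
    """Reported lower bound on the number of diffractive features N
    needed to approximate n_w independent linear transformations
    between n_i input pixels and n_o output pixels."""
    return factor * n_w * n_i * n_o

# Illustrative example: 8x8 input and output fields-of-view.
n_i = n_o = 8 * 8

print(min_diffractive_features(2, n_i, n_o))    # 2 wavelengths  -> 16384
print(min_diffractive_features(180, n_i, n_o))  # 180 wavelengths -> 1474560

# The larger design regime (N = 3*Nw*Ni*No) mentioned in the study:
print(min_diffractive_features(180, n_i, n_o, factor=3))
```

The quadratic dependence on the field-of-view pixel counts (Ni times No) explains why parallelizing many transformations requires a very large number of diffractive features.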
Regarding the prospects of this new computing design, Ozcan says, “Such massively parallel, wavelength-multiplexed diffractive processors will be useful for designing high-throughput intelligent machine vision systems and hyperspectral processors, and could inspire numerous applications in various fields, including biomedical imaging, remote sensing, analytical chemistry and materials science.”
Jingxi Li et al, Massively parallel universal linear transformations using a wavelength-multiplexed diffractive optical network, Advanced Photonics (2023). DOI: 10.1117/1.AP.5.1.016003
Citation: Deep learning-designed diffractive processor computes hundreds of transformations in parallel (2023, January 9) retrieved January 10, 2023 from https://phys.org/news/2023-01-deep-learning-designed-diffractive-processor-hundreds.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.