Training an artificial neural network for a specific task can be a computationally intensive, energy-hungry undertaking. Researchers at Stanford University, USA, have now demonstrated that such training can be carried out directly on a silicon photonic chip.
A new way to train
In previous experiments on optical neural networks, other researchers performed the network training on a conventional computer and then transferred the results onto a photonic chip. Here, the Stanford group carried out the training algorithm physically, by propagating an error signal through the circuits of the chip itself. According to Hughes, this method “should make training of optical neural networks far more efficient and robust.”
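As a rough illustration of the difference, the sketch below (Python/NumPy, with made-up toy data) mimics the conventional workflow: a small weight matrix is trained entirely in software, and only afterward would it be mapped onto a chip's interferometer settings. It is a simplified stand-in for the earlier experiments, not a reproduction of them.

```python
import numpy as np

# Toy stand-in for the conventional workflow: fit a small linear layer in
# software by gradient descent. In earlier optical-neural-network demos the
# trained matrix would then be decomposed into interferometer settings and
# written onto the chip, rather than being learned on the chip itself.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))               # toy input data
W_true = rng.normal(size=(4, 4))
Y = X @ W_true                              # toy regression targets

W = np.zeros((4, 4))                        # weights trained "in silico"
lr = 0.05
for _ in range(500):
    grad = 2 * X.T @ (X @ W - Y) / len(X)   # gradient of mean squared error
    W -= lr * grad

# W would now be programmed onto the photonic hardware; the Stanford
# approach instead obtains the gradients on the chip itself.
print("training error:", np.mean((X @ W - Y) ** 2))
```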
For hardware, the Stanford team used a silicon photonic architecture similar to a programmable processor described last year by researchers at the Massachusetts Institute of Technology, USA. Basically, it’s a mesh of tiny, tunable Mach-Zehnder interferometers. For software, the researchers derived the algorithm from the mathematics of the optical circuit, going all the way back to Maxwell’s equations.
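The article does not give the chip’s exact layout, but a mesh of tunable Mach-Zehnder interferometers can be sketched numerically. In the snippet below (Python/NumPy), each interferometer is modeled as two 50:50 couplers sandwiching a tunable internal phase shifter, preceded by an input phase shifter, and the interferometers are tiled in a rectangular arrangement over adjacent waveguide pairs; both the coupler convention and the layout are illustrative assumptions, not the device’s documented design. Because the mesh is a product of unitary 2×2 blocks, its overall transfer matrix stays unitary for any phase settings, which the final check confirms.

```python
import numpy as np

def mzi(theta, phi):
    """2x2 transfer matrix of one Mach-Zehnder interferometer: an input
    phase shifter (phi) followed by two 50:50 couplers with a tunable
    internal phase shifter (theta) between them (one common convention)."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 coupler
    ps = lambda p: np.diag([np.exp(1j * p), 1.0])    # single-arm phase shift
    return bs @ ps(theta) @ bs @ ps(phi)

def mesh_unitary(thetas, phis, n_modes):
    """Compose a rectangular mesh of MZIs acting on adjacent waveguide
    pairs into a single n_modes x n_modes transfer matrix."""
    U = np.eye(n_modes, dtype=complex)
    k = 0
    for layer in range(n_modes):
        for i in range(layer % 2, n_modes - 1, 2):   # alternate even/odd pairs
            T = np.eye(n_modes, dtype=complex)
            T[i:i + 2, i:i + 2] = mzi(thetas[k], phis[k])
            U = T @ U
            k += 1
    return U

# Quick check: the mesh is unitary for any phase settings.
n = 4
n_mzi = sum(len(range(l % 2, n - 1, 2)) for l in range(n))
rng = np.random.default_rng(0)
U = mesh_unitary(rng.uniform(0, 2 * np.pi, n_mzi),
                 rng.uniform(0, 2 * np.pi, n_mzi), n)
assert np.allclose(U.conj().T @ U, np.eye(n))
```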
Fine-tuning
The “teaching” of the network involves sending a laser pulse one way through the optical circuit, measuring how the output signal deviates from the intended one, and then sending an optical signal carrying that error back through the circuit. Based on the returning signal, the artificial neural network adjusts itself by tweaking its circuitry via optical phase shifters. This tuning happens by “applying an electrical voltage to a heating element on the chip’s surface,” says Hughes, “which changes the optical properties of the waveguide slightly.” Tiny photodetectors near the phase shifters measure the intensity of the signal passing through the chip, giving the algorithm the gradient information needed for training and optimization.
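The following sketch (Python/NumPy) emulates that training loop numerically for a toy routing task: a field is sent forward through a small interferometer mesh, the output error is sent backward through the same mesh, and the overlap of the two fields at each phase shifter supplies the gradient used to nudge that phase. On the real chip this information comes from intensity readings at the photodetectors; here it is computed directly from simulated complex fields, and the mesh layout, loss function and learning rate are illustrative choices rather than the paper’s exact protocol.

```python
import numpy as np

def build_ops(n_modes):
    """Flatten a rectangular MZI mesh into elementary steps: ('ps', mode, k)
    tunable phase shifters and ('bs', mode) 50:50 couplers on (mode, mode+1)."""
    ops, k = [], 0
    for layer in range(n_modes):
        for i in range(layer % 2, n_modes - 1, 2):
            ops += [('ps', i, k), ('bs', i), ('ps', i, k + 1), ('bs', i)]
            k += 2
    return ops, k                                  # k = number of tunable phases

def forward(ops, phases, e_in):
    """Send a field forward, recording it just after every phase shifter."""
    e, tap = np.array(e_in, dtype=complex), {}
    for op in ops:
        if op[0] == 'ps':
            _, i, k = op
            e[i] *= np.exp(1j * phases[k])
            tap[k] = (i, e.copy())
        else:                                      # 50:50 coupler on (i, i+1)
            _, i = op
            a, b = e[i], e[i + 1]
            e[i], e[i + 1] = (a + 1j * b) / np.sqrt(2), (1j * a + b) / np.sqrt(2)
    return e, tap

def backward(ops, phases, delta):
    """Send the output error backward (adjoint pass), recording it at each shifter."""
    a, tap = np.array(delta, dtype=complex), {}
    for op in reversed(ops):
        if op[0] == 'ps':
            _, i, k = op
            tap[k] = (i, a.copy())
            a[i] *= np.exp(-1j * phases[k])
        else:
            _, i = op
            x, y = a[i], a[i + 1]
            a[i], a[i + 1] = (x - 1j * y) / np.sqrt(2), (-1j * x + y) / np.sqrt(2)
    return tap

# Toy task: learn phase settings that route light from input mode 0 to output mode 3.
n = 4
ops, n_phase = build_ops(n)
phases = np.random.default_rng(1).uniform(0, 2 * np.pi, n_phase)
e_in = np.array([1, 0, 0, 0], dtype=complex)
target = np.array([0, 0, 0, 1], dtype=complex)

lr = 0.1
for step in range(500):
    e_out, fwd = forward(ops, phases, e_in)
    adj = backward(ops, phases, e_out - target)    # back-propagated error signal
    grad = np.zeros(n_phase)
    for k in range(n_phase):
        i, ef = fwd[k]
        _, ab = adj[k]
        # Gradient of |e_out - target|^2 with respect to phase k, from the
        # overlap of the forward and backward fields at that phase shifter.
        grad[k] = -2 * np.imag(np.conj(ab[i]) * ef[i])
    phases -= lr * grad                            # "tweak the circuitry"

print("final loss:", np.sum(np.abs(forward(ops, phases, e_in)[0] - target) ** 2))
```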
For more information: Optica, doi:10.1364/OPTICA.5.000864