Photolithography is a commonly used manufacturing process that manipulates light to etch features onto surfaces, creating computer chips and optical devices like lenses. However, minute deviations in the process often result in these devices not matching their original designs. To bridge this design-manufacturing gap, a team from MIT and the Chinese University of Hong Kong devised a machine learning-based digital simulator that replicates the photolithography process using real data from the system, enabling the intended design to be fabricated more precisely.
The team integrated the digital simulator into a design framework along with a performance simulator for the completed devices. By bridging these two, users can produce optical devices that more accurately match the intended design and deliver optimal task performance. This method could potentially enhance the production of devices for mobile cameras, augmented reality, medical imaging, entertainment, and telecommunications.
The research, led by Cheng Zheng, a mechanical engineering graduate student at MIT, leveraged real data despite the cost and the scarcity of precedent. Zheng explained that data from the real world greatly surpasses the efficacy of simulators that rely solely on analytical equations. Although acquiring such data is more expensive and can seem daunting, the precision and efficiency it provides are worth the effort.
The researchers’ technique, referred to as ‘neural lithography,’ builds a base simulator from physics-based equations and augments it with a neural network trained on real data from a photolithography system. The network compensates for the system’s specific deviations, so the finished device more closely matches the initial design. To gather training data, the researchers fabricate test designs of varying sizes and shapes and measure the resulting structures against the design specifications.
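The snippet below is a minimal sketch (in PyTorch) of that general idea: a simple physics-based exposure model whose output is corrected by a small neural network trained on measured fabrication data. The box-blur "physics" stand-in, the network architecture, the random placeholder measurements, and all names are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def physics_exposure(mask: torch.Tensor) -> torch.Tensor:
    """Crude stand-in for an analytical exposure model: blur the design
    pattern to mimic diffraction-limited projection."""
    kernel = torch.ones(1, 1, 5, 5) / 25.0           # simple box-blur kernel
    return F.conv2d(mask, kernel, padding=2)

class ResidualCorrector(nn.Module):
    """Small CNN that learns the gap between the analytical model and
    measurements of real fabricated structures."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, simulated: torch.Tensor) -> torch.Tensor:
        # physics prediction plus a learned residual correction
        return simulated + self.net(simulated)

# Train on (design, measured structure) pairs. Random tensors stand in here
# for real profilometry/AFM measurements of fabricated test patterns.
corrector = ResidualCorrector()
optimizer = torch.optim.Adam(corrector.parameters(), lr=1e-3)
designs = torch.rand(8, 1, 64, 64)
measurements = physics_exposure(designs) + 0.05 * torch.randn(8, 1, 64, 64)

for step in range(200):
    pred = corrector(physics_exposure(designs))       # calibrated prediction
    loss = F.mse_loss(pred, measurements)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained, the corrected simulator can be queried in place of the idealized physics model, so downstream design optimization accounts for how the tool actually behaves.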
The researchers divided the digital lithography simulator into two components: one that models the projection of light onto the device’s surface, and another that models how photochemical reactions produce features on that surface. The user specifies performance goals for the device, and the lithography and performance simulators then work in tandem to guide the user toward a design that meets them.
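Conceptually, this amounts to optimizing the design through both simulators end to end, as in the sketch below. The stand-in lithography simulator, the sigmoid "performance" model, the random target pattern, and the gradient-descent loop are all placeholder assumptions meant only to show the shape of the loop described above.

```python
import torch
import torch.nn.functional as F

def litho_simulator(design: torch.Tensor) -> torch.Tensor:
    """Stand-in for the calibrated lithography simulator (physics model plus
    learned correction): predicts the as-fabricated structure."""
    kernel = torch.ones(1, 1, 5, 5) / 25.0
    return F.conv2d(design, kernel, padding=2)

def performance_simulator(structure: torch.Tensor) -> torch.Tensor:
    """Placeholder for a differentiable model of how the finished device
    performs its task (e.g., the image a holographic element produces)."""
    return torch.sigmoid(4.0 * (structure - 0.5))

target = (torch.rand(1, 1, 64, 64) > 0.5).float()     # desired task output
design = torch.full((1, 1, 64, 64), 0.5, requires_grad=True)
optimizer = torch.optim.Adam([design], lr=1e-2)

for step in range(300):
    fabricated = litho_simulator(design)               # predicted as-built structure
    output = performance_simulator(fabricated)         # predicted task performance
    loss = F.mse_loss(output, target)                  # gap from the performance goal
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# `design` now encodes a pattern whose *as-fabricated* structure should meet
# the goal, compensating in advance for the tool's systematic deviations.
```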
The team tested their technique by creating a holographic element that generates a butterfly image when hit with light. Compared with devices produced using other methods, their holographic element produced a nearly perfect image and more closely matched the intended design.
Looking forward, the researchers aim to refine their algorithms to model more complex devices and test their system using consumer cameras. They also hope to extend their technique to different types of photolithography systems.
The research, supported by the U.S. National Institutes of Health, Fujikura Limited, and the Hong Kong Innovation and Technology Fund, utilized MIT.nano’s facilities and will be presented at the SIGGRAPH Asia Conference.