
Optical Connectivity Solutions for Generative AI

Generative AI, also known as GAI, differs from decision-based AI.

Decision-based AI relies on extensive training to make judgments, such as determining whether a shape detected in a radar scan is a person. The classic example of a decision-based AI error is the joke about mistaking a tree stump for a person.

Generative AI, on the other hand, learns to create and has the ability to generate content. It can write papers, create new artwork, generate video, and more. Much of the recent discussion around models like ChatGPT falls into the category of generative AI.


Decision-based AI and Generative AI


Decision-based AI classifies data and makes distinctions, such as telling pictures of cats apart from pictures of dogs, while generative AI analyzes existing data and produces new content, such as generating pictures of cats and dogs. Decision-based AI learns the conditional probability distribution in the data, that is, the probability that a sample belongs to a specific category, and then uses it to judge, analyze, and predict new scenarios. Its main application areas include facial recognition, recommendation systems, risk-control systems, other intelligent decision-making systems, robotics, autonomous driving, and so on. In facial recognition, for example, decision-based AI extracts feature information from a facial image captured in real time and matches it against feature data in a face database to identify the person.

Generative AI builds on the same deep learning foundation but also has the ability to create new content. The first major breakthrough was the large language model (LLM): a model is trained on a large amount of human-written books and other knowledge, and then recombines that knowledge through its algorithms. ChatGPT belongs to GAI, with an emphasis on generating text, images, audio, video, data, and more.
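As a rough illustration of the distinction, the toy sketch below (not from the original article; the data, class names, and numbers are invented) fits a discriminative classifier that models the conditional probability p(dog | x) on 1-D data, and a simple generative model that learns each class's distribution and samples new points from it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: "cat" feature values cluster near 0, "dog" values near 3.
x_cat = rng.normal(0.0, 1.0, 200)
x_dog = rng.normal(3.0, 1.0, 200)
x = np.concatenate([x_cat, x_dog])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = cat, 1 = dog

# Decision-based (discriminative): learn p(dog | x) with logistic regression.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # predicted p(dog | x)
    w -= 0.1 * np.mean((p - y) * x)         # gradient step on the log loss
    b -= 0.1 * np.mean(p - y)

def classify(sample):
    """Judge an existing sample using the learned conditional probability."""
    return "dog" if 1.0 / (1.0 + np.exp(-(w * sample + b))) > 0.5 else "cat"

# Generative: learn each class's data distribution, then sample new content.
mu_cat, sd_cat = x_cat.mean(), x_cat.std()
mu_dog, sd_dog = x_dog.mean(), x_dog.std()

def generate(label, n=3):
    """Produce new 'cat' or 'dog' samples from the learned distribution."""
    mu, sd = (mu_cat, sd_cat) if label == "cat" else (mu_dog, sd_dog)
    return rng.normal(mu, sd, n)

print(classify(2.8))    # discriminative: classifies an existing sample
print(generate("dog"))  # generative: creates new samples
```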


For generative AI, the relationship between the number of training parameters and the quality of the generated images and text resembles the P-I (power vs. current) curve of the lasers used in optical modules: once the parameter count crosses a 'threshold,' the quality of the generated work improves sharply.
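For reference, the laser P-I relationship the analogy refers to is commonly approximated above threshold as

```latex
P_{\text{out}}(I) \;\approx\;
\begin{cases}
0, & I < I_{\text{th}} \\
\eta_{\text{slope}}\,(I - I_{\text{th}}), & I \ge I_{\text{th}}
\end{cases}
```

where η_slope is the slope efficiency (W/A) and I_th is the threshold current; the analogy is that model quality, like optical output power, stays near zero until a threshold is crossed and then rises steeply.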


The P-I curve of generative AI is very similar to that of lasers



ChatGPT 3.5 has 175 billion training parameters, while 4.0 requires around 20 trillion, which translates to approximately 25,000 GPUs (according to NVIDIA's description). Training at this scale involves an enormous amount of data exchange between GPUs, and the role of the optical module is to carry that data. Because the GPUs compute in parallel, the interconnect needs high bandwidth to handle the larger data volume, low power consumption to avoid the reliability risks of overheating switches, and low latency to improve AI efficiency. This requires optical modules with high bandwidth, low power consumption, and low latency.
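As a rough, illustrative back-of-envelope estimate (the per-GPU bandwidth, module rate, and fabric multiplier below are assumptions for illustration, not figures from the article or from NVIDIA), one can see why the optical port count of such a cluster becomes large:

```python
# Illustrative estimate only: all inputs are assumptions, not vendor figures.
gpus = 25_000           # cluster size mentioned above
gbps_per_gpu = 400      # assumed network bandwidth provisioned per GPU
module_gbps = 800       # one 800G pluggable optical module
fabric_factor = 3       # assumed multiplier for a multi-tier leaf/spine fabric

aggregate_tbps = gpus * gbps_per_gpu / 1_000
modules = gpus * gbps_per_gpu / module_gbps * fabric_factor

print(f"Aggregate GPU bandwidth: {aggregate_tbps:,.0f} Tb/s")
print(f"Rough 800G module count: {modules:,.0f}")
```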


A large number of GPUs requires optical modules with high bandwidth, low power consumption, and low latency.

In the second half of 2023, mass deployment of 800G optical modules surged, with VCSEL, EML, and silicon photonics (SiPh) solutions all attracting strong market expectations. The transition in demand from 800G to 1.6T has also accelerated, and companies have demonstrated 1.6T optical modules this year, including designs based on EML and on silicon photonics integration.


Demo of 1.6T optical module for GAI



Eye diagram and BER curve of 1.6T with 8 × 200G


The following are the demonstration diagrams for 4 channels
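For context on how BER curves like those shown relate to signal quality, the snippet below uses the standard Gaussian-noise approximation (textbook formulas, not results from this demo) to convert a Q-factor into BER for NRZ/OOK and, with the usual Gray-coded assumption, for PAM4:

```python
import math

def ber_nrz(q):
    """Gaussian-noise approximation for NRZ/OOK: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def ber_pam4(q):
    """Common Gray-coded PAM4 approximation with equal eye openings:
    BER ~= (3/8) * erfc(Q / sqrt(2)), where Q is defined per eye."""
    return 0.375 * math.erfc(q / math.sqrt(2))

for q in (3.0, 5.0, 7.0):
    print(f"Q = {q}: NRZ BER ~ {ber_nrz(q):.2e}, PAM4 BER ~ {ber_pam4(q):.2e}")
```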



In GAI application scenarios, the low power consumption and low latency of LPO become very attractive technical features. LPO is not suited to long-distance transmission, so DSP-based optical modules are used for longer reaches, while LPO/LRO covers short-reach links, striking a balance between cost-effectiveness and power consumption.
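As an illustrative sketch of that trade-off (the reach limits and per-module power figures below are rough assumptions for illustration, not specifications from the article), a simple deployment rule might look like:

```python
# Illustrative only: reach limits and power figures are assumptions, not specs.
MODULE_TYPES = {
    # type: (assumed max reach in meters, assumed power per 800G module in watts)
    "LPO/LRO":             (500, 8),
    "DSP-based pluggable": (2000, 16),
}

def pick_module(link_length_m):
    """Prefer the lowest-power module type whose assumed reach covers the link."""
    candidates = [(power, name) for name, (reach, power) in MODULE_TYPES.items()
                  if link_length_m <= reach]
    if not candidates:
        return "coherent / longer-reach module needed"
    return min(candidates)[1]

for length in (30, 800, 5000):
    print(f"{length} m -> {pick_module(length)}")
```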


800G LPO solution

Regarding the future packaging route, Juhua believes that in the 1.6T/3.2T era, hot-pluggable modules will remain predominant. Whether DPO, LPO, or LRO, all rely on the traditional hot-pluggable optical module manufacturing and supply chain.

For higher bandwidth density and lower energy consumption in the future, non-pluggable approaches such as NPO, CPO, and IPO, in which the optics are packaged near, co-packaged with, or integrated inside the switch chip, will have market value.


Optical module packaging



In the future, 3.2T optical modules for slightly longer distances can use 8×400G coherent-Lite, i.e., coherent or simplified-coherent modules. These still have more complex designs, higher power consumption, and greater latency than IM-DD solutions, and will be used only in specific scenarios where traditional PAM4 direct detection cannot meet the requirements.

The 16×200 Gbps (100 GBd) architecture is expected to be the mainstream form of 3.2T Ethernet modules, with silicon photonics integration technology taking a significant market share.

At 200 GBd, i.e., 400 Gbps per lane with PAM4, differential-drive EML or thin-film lithium niobate (TFLN) modulators need to be considered; differential EML adopts TO-can packaging.
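The lane arithmetic behind these two options follows from the standard relation line rate = symbol rate × bits per symbol (2 bits per symbol for PAM4); the short check below simply encodes that relation:

```python
import math

def lane_gbps(symbol_rate_gbd, pam_levels=4):
    """Line rate per lane: symbol rate times bits per symbol (PAM4 carries 2 bits)."""
    return symbol_rate_gbd * math.log2(pam_levels)

for gbd, label in ((100, "16-lane option"), (200, "8-lane option")):
    per_lane = lane_gbps(gbd)
    lanes = 3200 / per_lane
    print(f"{gbd} GBd PAM4 -> {per_lane:.0f} Gb/s per lane, "
          f"{lanes:.0f} lanes for a 3.2T module ({label})")
```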


In summary, optical modules for data centers serving generative AI applications require high bandwidth capabilities for different transmission distances. In the future, EML, VCSEL, and silicon photonics integration technologies are expected to become the primary technical directions in this field.


