# Topic 8: Distributed And Parallel AI Computing Framework

## Motivation:

* The scale and complexity of models keep growing, e.g., GPT-3 with 175 billion parameters, face recognition models covering millions of identities, and recommendation models with tens of billions of features.
* Splitting a model manually is difficult: developers must combine information such as computation cost, cluster size, communication bandwidth, and network topology to construct a parallel strategy.
* Existing expressions of parallelism lack adaptability, and simple graph-level model partitioning cannot achieve high speedup. Algorithm logic and parallel logic need to be decoupled.

## Target:

Driven by super-large models, research key technologies for accelerating distributed training, including but not limited to automatic parallelism, hybrid parallelism, memory optimization, and elastic scaling, e.g., achieving efficient heterogeneous automatic parallelism and near-linear speedup.

## Method:

We expect applicants to conduct research on distributed and parallel AI computing frameworks based on MindSpore, and we hope to receive your valuable suggestions for MindSpore along the way. We will do our best to improve the capabilities of the MindSpore framework and provide you with strong technical support. (A minimal parallel-configuration sketch is included at the end of this topic for orientation.)

## How To Join:

* Submit an issue/PR based on community discussion to consult on or claim a related topic
* Submit your proposal to us by email
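As a concrete starting point, the sketch below shows roughly how a parallel mode is selected in a MindSpore training script. It is only an illustrative example under assumptions: the API names (`mindspore.set_auto_parallel_context`, `mindspore.communication.init`/`get_rank`) follow common MindSpore releases and may differ across versions, and the device count of 8 is an assumed cluster size, not part of this topic.

```python
# Minimal sketch: enabling a distributed parallel mode in MindSpore.
# Assumes MindSpore is installed with a distributed backend (e.g. HCCL or NCCL)
# and that this script is launched once per device (e.g. via msrun or mpirun).
import mindspore as ms
from mindspore.communication import init, get_rank

# Initialize the communication backend so collective ops (AllReduce, etc.) work.
init()

# Choose a parallel mode. "semi_auto_parallel" lets the framework derive most
# tensor-sharding strategies from a few user annotations; "auto_parallel" asks
# the framework to search strategies itself; "data_parallel" replicates the model.
ms.set_auto_parallel_context(
    parallel_mode="semi_auto_parallel",  # or "data_parallel", "auto_parallel"
    device_num=8,                        # assumed cluster size for this sketch
    gradients_mean=True,                 # average gradients across devices
)

print(f"rank {get_rank()} configured for semi-auto parallel training")
```

Research under this topic (automatic strategy search, hybrid parallelism, memory optimization, elastic scaling) would typically extend or replace what this configuration step does under the hood.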