Topic 7: Tensor Differentiable Computing Framework
Motivation:
- New network models pose challenges to the IR expression, optimization, and execution of deep learning frameworks, including the introduction of new operator abstraction levels and the dynamic nature of models.
- Accelerating third-party high-performance computing languages and frameworks urgently requires a more versatile and open tensor computing framework and API design.
- Unified optimization across the model layer and the operator layer raises technical challenges, including hierarchical IR design, optimization infrastructure, automatic tuning, and loop optimization.
- Solving differential equations involves a large number of derivative computations, which places high demands on the framework's differentiation expressiveness, interface design, and the efficiency and reliability of its analysis algorithms.
Target:
Driven by cutting-edge applications, and from the perspectives of new models, dynamic models, and high-performance computing languages, study the evolution direction and key technology paths of future computing frameworks, for example, differentiable programming with support for higher-order differentiation (see the sketch below).
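As a minimal sketch of what "differentiable programming with higher-order differentiation" looks like in practice, the example below nests `ops.GradOperation` to obtain a second-order derivative, following the commonly documented MindSpore pattern; the network, class names, and test value are illustrative assumptions, and exact APIs may differ across MindSpore versions.

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, nn, ops

class Cube(nn.Cell):
    """Toy network: f(x) = x^3 (illustrative assumption)."""
    def construct(self, x):
        return x ** 3

class FirstGrad(nn.Cell):
    """Computes df/dx by wrapping the network with GradOperation."""
    def __init__(self, net):
        super().__init__()
        self.net = net
        self.grad = ops.GradOperation()
    def construct(self, x):
        return self.grad(self.net)(x)

class SecondGrad(nn.Cell):
    """Computes d^2f/dx^2 by differentiating the first-order graph again."""
    def __init__(self, net):
        super().__init__()
        self.first = FirstGrad(net)
        self.grad = ops.GradOperation()
    def construct(self, x):
        return self.grad(self.first)(x)

x = Tensor(np.array([2.0], np.float32))
print(FirstGrad(Cube())(x))   # 3 * x^2 -> 12.0
print(SecondGrad(Cube())(x))  # 6 * x   -> 12.0
```

The same nesting idea extends to the mixed partial derivatives needed when embedding differential-equation residuals in a loss function, which is one of the workloads this topic targets.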
Method:
We expect applicants to conduct research on tensor differentiable computing frameworks based on MindSpore, and we hope to receive your valuable suggestions for MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the strongest technical support.
How To Join:
- Submit an issue/PR in the community for discussion, consultation, or to claim a related topic
- Submit your proposal to us by email: peng.yuanpeng@huawei.com