
## Profile

HPC dev, web3 dev, CUDA dev.

## Intro

I use C++, CUDA, and Python for:

- Low-level CUDA operator development (general CUDA-core kernels, cuBLAS, WMMA, PTX); see the sketch below.
- LLM inference systems, especially HPC operator support.
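
To give a flavor of that operator work, here is a minimal, illustrative WMMA sketch (not taken from any particular repository): one warp computes a single 16x16 half-precision tile of `C = A * B` on tensor cores. The kernel name, matrix shapes, and layouts are assumptions made for the example.

```cuda
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

// Minimal sketch: one warp computes one 16x16x16 tile of C = A * B with
// tensor-core WMMA fragments. Assumes row-major A, column-major B, M = N = K = 16.
// Real kernels tile over larger matrices and stage data through shared memory.
__global__ void wmma_tile_16x16x16(const half *A, const half *B, float *C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);              // zero the accumulator
    wmma::load_matrix_sync(a_frag, A, 16);          // leading dimension 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag); // C += A * B on tensor cores
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}
```

Launched with a single warp, e.g. `wmma_tile_16x16x16<<<1, 32>>>(dA, dB, dC);`, and compiled for a tensor-core architecture (sm_70 or newer).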

I am currently a High Performance Computing intern on the Paddle R&D team at Baidu.

## Pinned

1. triton-inference-server/vllm_backend (Python)

2. bupt-hotel-management (Vue): hotel management system with wind-speed scheduling based on a round-robin (RR) algorithm; Python and Golang backends. Software Engineering course project, BUPT.