Research Projects
- Tight Bounds for Gaussian Interpolators: $\ell_1$, $\ell_2$, and General Norms Jun 2024 - Present
  Research Assistant | Advisor: Prof. Frederic Koehler
  - Investigating possible overfitting phenomena in SURE estimators through extensive simulations.
  - Applying the $\ell_1$-norm tight-bound technique developed by G. Wang et al. to interpolators under general norms.
  - Conducting a thorough literature review on CGMT, uniform convergence, and effective rank.
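The SURE simulations above can be illustrated with a minimal sketch (not the project's actual code): in the Gaussian sequence model, SURE for soft-thresholding (the Donoho–Johnstone SureShrink formula) can be compared against the true risk, which is computable in simulation because the mean vector is known. The sparse-mean setup below is purely illustrative.

```python
import numpy as np

def sure_soft_threshold(y, lam):
    """SURE for soft-thresholding in the Gaussian sequence model
    y_i = theta_i + N(0, 1):
    SURE(lam) = n - 2*#{|y_i| <= lam} + sum_i min(y_i^2, lam^2)."""
    return (len(y) - 2 * np.sum(np.abs(y) <= lam)
            + np.sum(np.minimum(np.abs(y), lam) ** 2))

def soft(y, lam):
    """Soft-thresholding estimator."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Illustrative simulation: SURE should track the true risk curve.
rng = np.random.default_rng(0)
theta = np.concatenate([np.full(20, 3.0), np.zeros(480)])  # sparse mean
y = theta + rng.normal(size=theta.size)

lams = np.linspace(0.0, 3.0, 61)
sure = np.array([sure_soft_threshold(y, l) for l in lams])
# True risk is available here only because theta is known in simulation.
risk = np.array([np.sum((soft(y, l) - theta) ** 2) for l in lams])
```

Since SURE is unbiased for the risk, the two curves should nearly coincide; a gap between the SURE-minimizing threshold and the risk-minimizing one is exactly the kind of overfitting the simulations probe.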
- Parallel Inference for Quantile Regression Using Stochastic Subgradient Descent Jun 2024 - Present
  Master's Thesis (paper) | Advisor: Prof. Wei Biao Wu
  - Introduced a parallel inference framework for quantile regression based on stochastic subgradient descent.
  - The method scales to large datasets by distributing computation across multiple processors, and constructs confidence intervals for the coefficients by averaging the parallel runs, leveraging asymptotic theory.
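The divide-and-average idea can be sketched as follows (illustrative only: the function names, step-size schedule, and the simple between-run CI below are my assumptions, not the thesis's exact procedure):

```python
import numpy as np

def sgd_quantile(X, y, tau, lr0=0.5, epochs=5, seed=0):
    """Stochastic subgradient descent on the pinball (check) loss
    rho_tau(r) = r * (tau - 1{r < 0}), with a 1/sqrt(t) step size."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            r = y[i] - X[i] @ beta
            g = -(tau - float(r < 0)) * X[i]  # subgradient of rho_tau at r
            beta -= (lr0 / np.sqrt(t)) * g
    return beta

def parallel_quantile_ci(X, y, tau=0.5, k=8):
    """Split the data across k 'processors', fit each split independently,
    average the runs, and form a simple normal-theory 95% CI from the
    between-run spread (an illustration of the averaging idea)."""
    parts = np.array_split(np.arange(len(y)), k)
    betas = np.stack([sgd_quantile(X[idx], y[idx], tau, seed=j)
                      for j, idx in enumerate(parts)])
    mean = betas.mean(axis=0)
    se = betas.std(axis=0, ddof=1) / np.sqrt(k)
    return mean, mean - 1.96 * se, mean + 1.96 * se
```

In practice the k splits would run on separate workers; they run sequentially here only for clarity.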
- Detoxification: Self-Supervisor, External Monitor, and Adversarially Trained System Prompt Mar 2024 - Jun 2024
  Course Project (paper) | Advisor: Prof. Bo Li
  - Developed three methods for preventing toxic content generation by large language models (LLMs): a Self-Supervisor, an External Monitor, and an Adversarially Trained System Prompt.
  - Testing on the global dataset showed that our methods significantly reduce toxic outputs across various LLMs, including GPT-3.5-turbo, GPT-4, Llama-3-8b, and Vicuna-1.5-7b.
- Doubly Debiased Lasso in Partially Linear Models Under Hidden Confounding Sep 2022 - May 2023
  Undergraduate Dissertation (paper, in Chinese) | Advisor: Prof. Gaorong Li
  - Combined the Doubly Debiased Lasso with partially linear models, extending the former to nonlinear settings.
  - The resulting model contains both linear and nonparametric components, and simultaneously corrects the bias that the Lasso incurs in high dimensions and the bias introduced by hidden confounding in the data.
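The hidden-confounding correction in this line of work rests on spectral deconfounding: the "trim" transform shrinks the large singular values of the design matrix toward a quantile of the singular values, dampening the low-rank signature that dense hidden confounders leave in the data, after which the (debiased) Lasso is fit on the transformed data. A minimal sketch of the transform alone (the debiasing and nonparametric steps of the dissertation are omitted):

```python
import numpy as np

def trim_transform(X, q=0.5):
    """Spectral 'trim' transform: cap the singular values of X at the
    q-quantile of all singular values. Returns F such that F @ X (and
    F @ y) are the deconfounded data to feed into a Lasso fit."""
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    tau = np.quantile(d, q)
    scale = np.minimum(d, tau) / d      # shrink only the large directions
    return U @ np.diag(scale) @ U.T
```

Capping at the median is a common default; directions with below-median singular values pass through unchanged, so only the dominant (potentially confounded) directions are attenuated.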