Hello! 🍁 I am a Ph.D. student in the Department of ECE at North Carolina State University. My current interests lie broadly in Control Theory, Large-Scale Machine Learning, Statistical Learning Theory, and Adversarial Reinforcement Learning. Through my research, I also have a working background in Probability Theory, Linear Algebra, Stochastic Optimization, Randomized Algorithms, and Robust Statistics. Previously, I completed my M.Tech in Robotics and Autonomous Systems at the Indian Institute of Science, Bangalore in 2023, and my B.E. in Electrical Engineering at Jadavpur University, Kolkata in 2021. During my master's at IISc, I worked on perturbative networked systems and perception algorithms for swarms.

Aside from my passion for mathematics, I also enjoy cooking, reading, writing poetry, and reflecting deeply.
Also, I love cats and elephants. A lot. 🐘❀️😸

Pinned repositories

1. **Federated-MARL-Gym-Environment** (Jupyter Notebook)

   `Federated MARL-Gym`: We introduce a custom multi-agent reinforcement learning environment built with Gymnasium and Pygame, designed for evaluating federated RL (FRL) algorithms. The environment mod…
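
   For flavor, here is a minimal sketch of what a custom Gymnasium environment involves; the class name, grid layout, and reward shaping are my own illustrative assumptions, not the repo's actual code.

   ```python
   # A minimal two-agent Gymnasium environment sketch (illustrative only).
   import numpy as np
   import gymnasium as gym
   from gymnasium import spaces

   class TwoAgentGridEnv(gym.Env):
       """Two agents on a 1-D grid; each is rewarded for reaching its goal cell."""

       def __init__(self, size=8):
           self.size = size
           # Joint action: each agent moves left (0) or right (1).
           self.action_space = spaces.MultiDiscrete([2, 2])
           # Observation: both agents' positions on the grid.
           self.observation_space = spaces.Box(0, size - 1, shape=(2,), dtype=np.int64)

       def reset(self, seed=None, options=None):
           super().reset(seed=seed)
           self.pos = np.array([0, self.size - 1])   # agents start at opposite ends
           return self.pos.copy(), {}

       def step(self, action):
           # Move each agent ({0,1} -> {-1,+1}) and clip to the grid.
           self.pos = np.clip(self.pos + 2 * np.asarray(action) - 1, 0, self.size - 1)
           # Agent 0 aims for the right edge, agent 1 for the left edge.
           reward = float(self.pos[0] == self.size - 1) + float(self.pos[1] == 0)
           terminated = reward == 2.0
           return self.pos.copy(), reward, terminated, False, {}
   ```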

2. **Robust-Asynchronous-Q-Learning-with-Markovian-Data**

   Accepted at the NeurIPS 2025 Reliable ML Workshop πŸŽ‰. `Robust Async-Q/Async-RAQ/M`: The first provably robust variants of asynchronous Q-learning that tolerate adversarially corrupted rewards. Our algorit…
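
   As a rough illustration of the setting, here is an asynchronous Q-learning update that filters the observed reward through a robust statistic; the median-over-history filter is a generic stand-in, not the estimator from the paper.

   ```python
   # Sketch: asynchronous Q-learning with a robust reward estimate
   # (illustrative stand-in for the paper's estimator).
   from collections import defaultdict
   from statistics import median

   def robust_async_q_update(Q, history, s, a, r_obs, s_next, actions,
                             alpha=0.1, gamma=0.99):
       """One asynchronous update at a (possibly corrupted) sample (s, a, r, s')."""
       history[(s, a)].append(r_obs)              # keep past rewards seen at (s, a)
       r_hat = median(history[(s, a)])            # damps a few corrupted rewards
       target = r_hat + gamma * max(Q[(s_next, b)] for b in actions)
       Q[(s, a)] += alpha * (target - Q[(s, a)])  # only (s, a) is updated (async)

   Q, history = defaultdict(float), defaultdict(list)
   ```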

3. **Adversarially-Robust-TD-Learning-with-Markovian-Data** (MATLAB)

   Accepted at AISTATS 2025 πŸŽ‰. Policy evaluation estimates long-term returns in RL. Temporal Difference (TD) learning, a classic method with finite-time guarantees, assumes well-behaved rewards. But what…
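
   For reference, the vanilla tabular TD(0) update that such robust variants build on looks like this (my own minimal sketch, not the repo's code):

   ```python
   # Sketch: one tabular TD(0) step for policy evaluation. The repo studies
   # a robust variant of this update when rewards can be adversarially
   # corrupted and the data is Markovian.
   def td0_update(V, s, r, s_next, alpha=0.05, gamma=0.99):
       """V[s] <- V[s] + alpha * (r + gamma * V[s'] - V[s])."""
       td_error = r + gamma * V[s_next] - V[s]
       V[s] += alpha * td_error
       return td_error
   ```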

4. **Robust-Q-Learning-under-Corrupted-Rewards** (Jupyter Notebook)

   Accepted at IEEE CDC 2024 πŸŽ‰. We analyze Q-learning's robustness against strong-contamination attacks that disrupt reward signals. Our robust Q-learning algorithm employs historical data to create res…
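
   One standard way to turn historical reward samples into a contamination-robust estimate is a trimmed mean; the sketch below is illustrative and not necessarily the construction used in the repo.

   ```python
   # Sketch: a trimmed-mean reward estimate from historical samples, a common
   # tool under strong contamination (illustrative, not the repo's estimator).
   def trimmed_mean(samples, trim_frac=0.1):
       """Drop the top and bottom trim_frac of samples, then average the rest."""
       xs = sorted(samples)
       k = int(len(xs) * trim_frac)
       kept = xs[k: len(xs) - k] or xs   # fall back if trimming empties the list
       return sum(kept) / len(kept)
   ```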

5. **Robust-Federated-Q-Learning-with-Almost-No-communication** (Jupyter Notebook)

   `Robust Fed-Q`: A federated Q-learning algorithm that stays reliable even when a small fraction of agents are adversarial. It blends model-based/model-free updates with median-of-means to ensure (i)…
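
   Median-of-means itself is easy to state: partition the agents' estimates into groups, average within each group, and take the median of the group means. A minimal sketch (the group count here is an arbitrary choice):

   ```python
   # Sketch: median-of-means aggregation of per-agent estimates. A few
   # adversarial values can only corrupt a few group means, and the median
   # ignores those.
   import numpy as np

   def median_of_means(values, num_groups=5, seed=0):
       values = np.asarray(values, dtype=float)
       shuffled = np.random.default_rng(seed).permutation(values)
       groups = np.array_split(shuffled, num_groups)
       return float(np.median([g.mean() for g in groups]))
   ```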

6. **Q-Learning-over-Static-and-Time-Varying-Networks** (Jupyter Notebook)

   `VRDQ`: We propose and analyze a new algorithm that achieves collaborative speedups in sample complexity for Q-learning over static and time-varying networks.
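
   A typical round in this networked setting alternates a local Q-update with neighbor averaging over the communication graph; the mixing step below is a generic gossip sketch, not the VRDQ algorithm itself.

   ```python
   # Sketch: one gossip/mixing round over a communication graph. Each agent's
   # Q-table is replaced by a weighted average of its neighbors' tables, using
   # a doubly stochastic weight matrix W (illustrative only).
   import numpy as np

   def gossip_round(Q_tables, W):
       stacked = np.stack(Q_tables)                       # (n_agents, |S|, |A|)
       mixed = np.tensordot(W, stacked, axes=([1], [0]))  # weighted neighbor avg
       return [mixed[i] for i in range(len(Q_tables))]
   ```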