PPO-sleep-signature

Sleep plays a critical role in our overall health and well-being. But many people struggle with poor sleep quality and quantity, which can negatively impact their physical and mental health, as well as their daily functioning and quality of life. Improving sleep-related behaviors, such as bedtime, caffeine intake, and exposure to light, can help to improve sleep quality. However, determining the optimal sleep-related actions can be challenging, as different individuals have different sleep requirements and preferences.

  • We used the Multilevel Monitoring of Activity and Sleep in Healthy People (MMASH) dataset for the project, which provides psychological data (such as anxiety status, stress events, and emotions) for 22 healthy participants, along with 24-hour continuous beat-to-beat heart rate data, triaxial accelerometer data, sleep quality, physical activity, and sleep duration.

  • We applied the Proximal Policy Optimization (PPO) algorithm to this dataset to learn sleep-related actions tailored to each user.
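To make the setup concrete, the problem can be framed as a reinforcement learning environment in which the state summarizes a user's daily physiological signals and the action is a vector of continuous sleep-related choices. The sketch below is a minimal, hypothetical formulation; the feature set, action variables, and reward here are placeholders, not the project's actual implementation:

```python
import numpy as np

class SleepEnv:
    """Toy sleep-optimization environment (illustrative only).

    State  : 4 daily features, e.g. resting heart rate, activity level,
             stress score, previous night's sleep duration (all normalized).
    Action : 2 continuous choices in [-1, 1], e.g. bedtime shift and
             caffeine-intake adjustment.
    """

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.state_dim = 4
        self.action_dim = 2
        self.state = None

    def reset(self):
        # Draw a fresh (synthetic) daily feature vector.
        self.state = self.rng.normal(size=self.state_dim).astype(np.float32)
        return self.state

    def step(self, action):
        action = np.clip(np.asarray(action, dtype=np.float32), -1.0, 1.0)
        # Placeholder reward: penalize distance from a fictitious "ideal"
        # action; a real reward would be derived from measured sleep quality.
        reward = -float(np.sum((action - 0.2) ** 2))
        self.state = self.rng.normal(size=self.state_dim).astype(np.float32)
        done = True  # one decision per day in this toy episodic setup
        return self.state, reward, done, {}
```

Because the action space is continuous, a policy trained on such an environment would typically output the mean and standard deviation of a Gaussian over actions, which is exactly the setting PPO handles well.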

What is PPO?

  • PPO is a policy-based reinforcement learning algorithm, which means that it is well-suited for problems where the optimal action is not known a priori. In the case of sleep optimization, the optimal sleep schedule is not known, and the algorithm must learn it through trial and error.

  • PPO is designed to handle continuous action spaces, which is important for sleep optimization since the action space consists of continuous variables such as the time to go to bed and wake up.

  • PPO is an on-policy algorithm, which means that it learns from the data that it generates during training. This is important for sleep optimization since the algorithm needs to explore the sleep schedule space and generate new data in order to learn the optimal sleep schedule.

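The property that makes PPO stable during this trial-and-error exploration is its clipped surrogate objective, which limits how far each update can move the policy. A minimal NumPy illustration of that objective (not the project's training code, which lives in the linked Colab notebook):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective, L^CLIP (to be maximized).

    ratio     : pi_new(a|s) / pi_old(a|s), the probability ratio per sample
    advantage : estimated advantage A(s, a) per sample
    eps       : clipping range (0.2 in the original PPO paper)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the minimum makes the objective pessimistic: the policy gains
    # nothing by pushing the ratio outside [1 - eps, 1 + eps].
    return np.minimum(unclipped, clipped)
```

For example, with a positive advantage a ratio of 2.0 is clipped to 1.2, and with a negative advantage a ratio of 0.5 is clipped to 0.8, so large policy jumps in either direction stop being rewarded.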

Contributors:

  • Alabhya Sharma
  • Ritika Lakshminarayanan

Colab link: https://colab.research.google.com/drive/1933n5ZR5EPRYC4kyUX1cXXUpVtxqxqFN?usp=sharing

Link to the original dataset: https://physionet.org/content/mmash/1.0.0/

Architecture:

[Architecture diagram]

Result Graphs:

[Result graphs]

Results Obtained:

  • Valuable insights into the factors that impact sleep quality.
  • A model that suggests sleep-related actions based on an individual's sleep data, resulting in higher reward and improved sleep quality.
  • Additionally, the model is trained on the effect each action has on an individual's sleep, making it a versatile approach for pinpointing which actions matter most.
