The repo consists of four AWS Lambda functions, and the architecture graph above shows how they interact with each other.
The Lambdas form the heart of the system. They mainly contribute to two features:
- judging users' status from their photos and screenshots
- extracting discussion topics and matching them with replies
The deployments are a bit tricky; refer to the following sections to see how to deploy each Lambda and what exactly it does.
Note: Most of the Lambda layers are from Klayers.
focus_judger is triggered by an API Gateway, and the POST request sent to the API Gateway must contain a JSON body with the following parameters:
{
    "username": string,
    "photo": {
        "bucket": string,
        "key": string
    },
    "screenshot": {
        "bucket": string,
        "key": string
    }
}

Once focus_judger receives the request, it uses Amazon Rekognition to judge whether the user's status is working or lazy, then publishes the status data to the frontend's database.
If the condition is satisfied, focus_judger publishes a canned message to the LINE group and publishes a message to MQTT to trigger the IoT device.
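The working/lazy decision could be sketched as below. The label set, confidence threshold, and function name are illustrative assumptions, not the repo's actual rules; only the input shape follows Rekognition's `detect_labels` response (a list of `{"Name": ..., "Confidence": ...}` dicts):

```python
# Hypothetical sketch of the working/lazy decision. WORKING_LABELS and the
# threshold are assumptions for illustration; the input shape matches the
# Labels field of Amazon Rekognition's detect_labels response.
WORKING_LABELS = {"Computer", "Monitor", "Laptop", "Electronics"}

def judge_status(labels, threshold=80.0):
    """Return 'working' if any work-related label is confidently detected."""
    for label in labels:
        if label["Name"] in WORKING_LABELS and label["Confidence"] >= threshold:
            return "working"
    return "lazy"

print(judge_status([{"Name": "Computer", "Confidence": 95.2}]))  # working
print(judge_status([{"Name": "Couch", "Confidence": 99.0}]))     # lazy
```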
- Python 3.8
- CloudWatch
- Rekognition
- DynamoDB
- API Gateway
| name | Version ARN |
|---|---|
| Klayers-python38-requests | arn:aws:lambda:us-east-1:770693421928:layer:Klayers-python38-requests:17 |
| Klayers-python38-numpy | arn:aws:lambda:us-east-1:770693421928:layer:Klayers-python38-numpy:17 |
| Klayers-python38-pytz | arn:aws:lambda:us-east-1:770693421928:layer:Klayers-python38-pytz:5 |
| linebot-sdk | create it yourself |
record_handler is triggered by an S3 object-created event, and the object must be a WAV file.
Once record_handler receives the event, it uses Amazon Transcribe to transcribe the WAV file into text and generates the following data:
- username
- date: the date of the event creation in the Asia/Taipei time zone (e.g., 2021-06-23)
- time: the time (24-hour clock) of the event creation in the Asia/Taipei time zone (e.g., 08:30:00)
- texts: the transcription of the WAV file
This data is then published to the SQS queue to trigger topic_extractor.
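A minimal sketch of building that SQS message body. For illustration it uses a fixed UTC+8 offset (Asia/Taipei has no DST; the Lambda itself uses the pytz layer), and it assumes the username is encoded in the object key as `<username>/<file>.wav`, which is a guess about the repo's key scheme:

```python
import json
from datetime import datetime, timedelta, timezone

# Asia/Taipei is UTC+8 year-round; the real Lambda uses the pytz layer instead.
TAIPEI = timezone(timedelta(hours=8))

def build_message(s3_key: str, event_time_utc: datetime, texts: str) -> str:
    """Build the SQS message body described above.

    Assumes the object key looks like '<username>/<file>.wav' (hypothetical);
    adjust to the repo's actual key layout.
    """
    local = event_time_utc.astimezone(TAIPEI)
    return json.dumps({
        "username": s3_key.split("/", 1)[0],
        "date": local.strftime("%Y-%m-%d"),  # e.g., 2021-06-23
        "time": local.strftime("%H:%M:%S"),  # e.g., 08:30:00
        "texts": texts,
    })

body = build_message(
    "alice/standup.wav",
    datetime(2021, 6, 23, 0, 30, tzinfo=timezone.utc),
    "today we will discuss the demo",
)
print(body)
```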
- Python 3.8
- CloudWatch
- Transcribe
- SQS
- S3 object-created event
| name | Version ARN |
|---|---|
| Klayers-python38-pytz | arn:aws:lambda:us-east-1:770693421928:layer:Klayers-python38-pytz:5 |
| python-opencc | create it yourself |
topic_extractor is triggered by an SQS event, and the message must have a JSON body:
{
    "username": string,
    "date": string,
    "time": string,
    "texts": string
}

Once topic_extractor receives the SQS event, it extracts the segment describing the discussion topic from the texts provided in the SQS message body.
Then the segment, called a topic, is published to the frontend's database and to the SNS topic to notify all the members via email.
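One way the extraction step could look is a cue-phrase match, sketched below. The cue phrases and function name are purely illustrative assumptions; the Lambda's real extraction rules live in its source:

```python
from typing import Optional

# Hypothetical cue phrases; the actual extractor's rules may differ entirely.
CUES = ("the topic is", "today we will discuss")

def extract_topic(texts: str) -> Optional[str]:
    """Return the fragment after the first cue phrase, up to the next period."""
    lowered = texts.lower()
    for cue in CUES:
        idx = lowered.find(cue)
        if idx != -1:
            segment = texts[idx + len(cue):].split(".", 1)[0].strip()
            return segment or None
    return None

print(extract_topic("Good morning. Today we will discuss the sprint demo. Thanks."))
# → the sprint demo
```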
- Python 3.8
- CloudWatch
- DynamoDB
- SNS
- SQS
| name | Version ARN |
|---|---|
| Klayers-python38-requests | arn:aws:lambda:us-east-1:770693421928:layer:Klayers-python38-requests:17 |
| linebot-sdk | create it yourself |
reply_catcher is triggered by an API Gateway. Whenever a LINE group that the LINE bot has joined receives replies, LINE calls the API to pass the replies along.
Once reply_catcher receives a reply, it keeps the reply only if it comes from the discussion group. The reply, along with other information, is then published to the SQS queue so the worker can perform the matching process.
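The group filter could be sketched as below. The group ID is a placeholder and the helper name is hypothetical; the payload shape, however, follows the LINE Messaging API webhook format (`events[].source.groupId`, `events[].message.text`):

```python
import json

# Placeholder: in practice the discussion group's ID would come from config.
DISCUSSION_GROUP_ID = "C1234567890"

def collect_replies(webhook_body: str):
    """Keep only text messages sent in the discussion group.

    Parses the LINE Messaging API webhook payload. Note that source.userId
    is a LINE user ID, not a display name; mapping it to a username would
    need an extra lookup.
    """
    events = json.loads(webhook_body).get("events", [])
    replies = []
    for ev in events:
        if ev.get("type") != "message":
            continue
        source = ev.get("source", {})
        if source.get("type") == "group" and source.get("groupId") == DISCUSSION_GROUP_ID:
            message = ev.get("message", {})
            if message.get("type") == "text":
                replies.append({"user_id": source.get("userId"),
                                "text": message.get("text")})
    return replies
```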
- Python 3.8
- CloudWatch
- SQS
- API Gateway
| name | Version ARN |
|---|---|
| Klayers-python38-pytz | arn:aws:lambda:us-east-1:770693421928:layer:Klayers-python38-pytz:5 |