# 02_activities/assignments/DC_Cohort/Assignment2.md

**HINT:** You do not need to create any data for this prompt. This is a conceptual model only.



#### Prompt 2
We want to create employee shifts, splitting up the day into morning and evening. Add this to the ERD.





#### Prompt 3
The store wants to keep customer addresses. Propose two architectures for the CUSTOMER_ADDRESS table, one that will retain changes, and another that will overwrite. Which is type 1, which is type 2?

```
Your answer...
```

#### Prompt 1

This ERD shows a small bookstore with key entities for employees, customers, books, orders, sales, and dates.
Orders link customers and employees, and each order can include multiple books through the sales table.
The date table supports both order tracking and employee hire dates for analysis.


#### Prompt 2


This ERD extends the base model with a shift system.
A new Shift table defines morning and evening shifts, and an Employee_Shift_Assignment table connects each employee to a shift and date.
This design allows flexible daily scheduling while keeping employee and shift data normalized.
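A minimal DDL sketch of this design, assuming SQLite; the table and column names here are assumptions based on the description above, not part of the original ERD:

```sql
-- Sketch only: names and constraints are illustrative assumptions.
CREATE TABLE shift (
    shift_id   INTEGER PRIMARY KEY,
    shift_name TEXT NOT NULL CHECK (shift_name IN ('morning', 'evening'))
);

CREATE TABLE employee_shift_assignment (
    employee_id INTEGER NOT NULL REFERENCES employee (employee_id),
    shift_id    INTEGER NOT NULL REFERENCES shift (shift_id),
    work_date   TEXT    NOT NULL,   -- links to the date table for scheduling analysis
    PRIMARY KEY (employee_id, shift_id, work_date)
);
```

Keeping the shift definitions in their own table means adding an afternoon shift later is a single `INSERT`, with no schema change.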


#### Prompt 3

The store wants to keep customer addresses. There are two possible architectures depending on whether we overwrite or retain historical addresses.

**Option 1 – Overwrite current address (Type 1 Slowly Changing Dimension)**
This design keeps only the customer’s current address. When a customer moves, the record is updated in place.
- Example table:
- `customer_id`, `street`, `city`, `region`, `postal_code`, `country`, `last_updated_at`
- When the address changes, the old one is overwritten.
- Pros: Simple, easy to query current address.
- Cons: No address history is kept.

**Option 2 – Retain address history (Type 2 Slowly Changing Dimension)**
This design keeps every past address with start and end dates.
- Example table:
  - `customer_address_id`, `customer_id`, `street`, `city`, `region`, `postal_code`, `country`,
    `effective_start_date`, `effective_end_date` (NULL = current), `is_current`
- When a customer moves, we insert a new row and close the old one by setting its end date.
- Pros: Preserves full history for audits or time-based analysis.
- Cons: More complex queries and larger storage.

Summary:
- Type 1 = overwrite (current-only)
- Type 2 = retain history (time-stamped versions)
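As a sketch of the Type 2 mechanics, using the column names from the example table above (the specific values and dates are illustrative assumptions): a move is an `UPDATE` that closes the current row, followed by an `INSERT` of the new version.

```sql
-- Close the current row for customer 1...
UPDATE customer_address
SET effective_end_date = '2024-06-15',
    is_current = 0
WHERE customer_id = 1
  AND is_current = 1;

-- ...then insert the new current address.
INSERT INTO customer_address
    (customer_id, street, city, region, postal_code, country,
     effective_start_date, effective_end_date, is_current)
VALUES
    (1, '123 New St', 'Ottawa', 'ON', 'K1A 0A1', 'CA',
     '2024-06-15', NULL, 1);
```

The `is_current` flag is redundant with `effective_end_date IS NULL`, but it makes the common "current address" lookup cheap and obvious.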


***

## Section 2:
Consider, for example, concepts of labour, bias, LLM proliferation, and moderating content.
```
Your thoughts...
```

Vicki Boykis’s essay “Neural nets are just people all the way down” really made me stop and think about how artificial intelligence is built. We often imagine AI as something mechanical, almost magical, running on pure code and computation. But what Boykis shows is that behind every “smart” system there are real people doing quiet, repetitive, and often underpaid work. It made me realize that AI is not really automated at all. It is powered by countless hours of human labour, judgment, and sometimes bias.

One of the main ethical issues in this story is the invisibility of that human labour. Boykis explains how early language and image databases like the Brown Corpus, WordNet, and ImageNet were all created by people who spent endless hours tagging and organizing data. Many of them were graduate students or crowdworkers paid only a few cents per task through platforms like Amazon Mechanical Turk. Their work makes modern AI possible, yet their names are rarely mentioned. I find this troubling, because it mirrors other kinds of global inequality where the people who build the foundations of a system are hidden and undervalued.

Another major issue is bias. Every dataset reflects the choices, culture, and limitations of the people who created it. Boykis points out how ImageNet once labeled people with words like “orphan” or “criminal,” showing how social stereotypes can end up inside the algorithms that shape our world. When these systems are used for things like facial recognition or automated decision making, those biases can turn into real harm. It made me think about how “objectivity” in technology is often an illusion, because humans define what the data means in the first place.

The essay also made me question the idea of automation itself. We often talk about AI as if it were independent, learning on its own. But Boykis shows that at every step, humans are still guiding, labeling, and correcting. Calling it “machine learning” hides who is actually doing the work and who is responsible when something goes wrong. Recognizing that there are people “all the way down” forces us to think about ethics not as a side topic, but as something built into the very structure of technology.

What stayed with me most is Boykis’s comparison to sewing. Just like sewing still relies on human intuition, AI also relies on the creativity and care of people. Her essay reminds me that progress in technology should never come at the cost of fairness or visibility. True innovation begins by valuing the human effort that makes it all possible.

***

# 02_activities/assignments/DC_Cohort/assignment2.sql

/* The `||` values concatenate the columns into strings.
Edit the appropriate columns -- you're making two edits -- and the NULL rows will be fixed.
All the other rows will remain the same. */

/* Find rows with NULLs (for sanity check) */
SELECT product_id, product_name, product_size, product_qty_type
FROM product
WHERE product_size IS NULL
OR product_qty_type IS NULL;

/* Replace NULLs in the concatenated label:
- product_size: blank when NULL
- product_qty_type: 'unit' when NULL
*/
SELECT
product_name
|| ', '
|| COALESCE(product_size, '')
|| ' ('
|| COALESCE(product_qty_type, 'unit')
|| ')' AS product_label
FROM product;



-- Windowed Functions
/* 1. Number each customer's visits: label each new market date for each customer, or select only
the unique market dates per customer and number those.
HINT: One of these approaches uses ROW_NUMBER() and one uses DENSE_RANK(). */


SELECT
customer_id,
market_date,
DENSE_RANK() OVER (
PARTITION BY customer_id
ORDER BY market_date
) AS visit_number
FROM customer_purchases
ORDER BY customer_id, market_date;
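/* The hint's other approach: ROW_NUMBER() over de-duplicated customer/date pairs.
   Sketch only -- equivalent to the DENSE_RANK() version above once the DISTINCT
   has removed duplicate purchases on the same date. */
SELECT
    customer_id,
    market_date,
    ROW_NUMBER() OVER (
        PARTITION BY customer_id
        ORDER BY market_date
    ) AS visit_number
FROM (
    SELECT DISTINCT customer_id, market_date
    FROM customer_purchases
) AS v
ORDER BY customer_id, market_date;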



/* 2. Reverse the numbering of the query from the previous part so each customer’s most recent visit is labeled 1,
then write another query that uses this one as a subquery (or temp table) and filters the results to
only the customer’s most recent visit. */


WITH numbered AS (
SELECT
customer_id,
market_date,
DENSE_RANK() OVER (
PARTITION BY customer_id
ORDER BY market_date DESC
) AS rev_visit_number
FROM (
SELECT DISTINCT customer_id, market_date
FROM customer_purchases
    ) AS visits
)
SELECT customer_id, market_date
FROM numbered
WHERE rev_visit_number = 1
ORDER BY customer_id;

/* 3. Using a COUNT() window function, include a value along with each row of the
customer_purchases table that indicates how many different times that customer has purchased that product_id. */


SELECT
    cp.*,
    COUNT(*) OVER (
        PARTITION BY customer_id, product_id
    ) AS times_customer_bought_product
FROM customer_purchases AS cp
ORDER BY customer_id, product_id, market_date;
-- Note: SQLite does not allow COUNT(DISTINCT ...) as a window function, so this
-- counts purchase rows; de-duplicate by market_date first if "different times"
-- should mean distinct dates.

-- String manipulations
/* 1. Some product names in the product table have descriptions like "Jar" or "Organic".
Remove any trailing or leading whitespaces. Don't just use a case statement for each product.

Hint: you might need to use INSTR(product_name,'-') to find the hyphens. INSTR will help split the column. */

SELECT
product_name,
CASE
WHEN INSTR(product_name, '-') > 0
THEN TRIM(SUBSTR(product_name, INSTR(product_name, '-') + 1))
ELSE NULL
END AS description
FROM product;


/* 2. Filter the query to show any product_size value that contain a number with REGEXP. */

SELECT *
FROM product
WHERE product_size REGEXP '[0-9]';
-- Note: stock SQLite ships without a REGEXP implementation; if it is unavailable,
-- WHERE product_size GLOB '*[0-9]*' gives the same result.


-- UNION
/* HINT: There are possibly a few ways to do this query, but if you're struggling, try:
1) aggregating total sales per market_date,
2) ranking those dates by total sales in a second temp table,
3) Query the second temp table twice, once for the best day, once for the worst day,
with a UNION binding them. */

/* Highest and lowest total-sales market dates (with UNION) */
WITH daily AS (
/* 1) Total sales per date */
SELECT
market_date,
SUM(quantity * cost_to_customer_per_qty) AS total_sales
FROM customer_purchases
GROUP BY market_date
),
ranked AS (
/* 2) Rank best and worst by total_sales */
SELECT
market_date,
total_sales,
RANK() OVER (ORDER BY total_sales DESC) AS r_max,
RANK() OVER (ORDER BY total_sales ASC) AS r_min
FROM daily
)
/* 3) UNION best day(s) and worst day(s) */
SELECT market_date, total_sales, 'highest' AS kind
FROM ranked
WHERE r_max = 1

UNION ALL

SELECT market_date, total_sales, 'lowest' AS kind
FROM ranked
WHERE r_min = 1

ORDER BY kind, market_date;




/* Think a bit about the row counts: how many distinct vendors, product names are there (x)?
How many customers are there (y).
Before your final group by you should have the product of those two queries (x*y). */

/* Revenue per vendor per product if every customer buys 5 units */
WITH n_customers AS (
SELECT COUNT(*) AS n FROM customer
),
latest_price AS (
SELECT
vendor_id,
product_id,
original_price,
ROW_NUMBER() OVER (
PARTITION BY vendor_id, product_id
ORDER BY market_date DESC
) AS rn
FROM vendor_inventory
),
base AS (
SELECT
v.vendor_name,
p.product_name,
lp.original_price AS price_per_qty
FROM latest_price lp
JOIN vendor v USING (vendor_id)
JOIN product p USING (product_id)
WHERE lp.rn = 1 -- keep the latest price per vendor-product
)
SELECT
b.vendor_name,
b.product_name,
5 * n.n * b.price_per_qty AS hypothetical_revenue
    -- review comment: the literal 5 is a magic number (the "5 units" from the prompt)
FROM base b
CROSS JOIN n_customers n
ORDER BY b.vendor_name, b.product_name;



-- INSERT
/* 1. Create a new table called product_units, containing only products where product_qty_type = 'unit'.
It should use all of the columns from the product table, as well as a new column for the timestamp.
Name the timestamp column `snapshot_timestamp`. */


DROP TABLE IF EXISTS product_units;

CREATE TABLE product_units AS
SELECT
*,
CURRENT_TIMESTAMP AS snapshot_timestamp
FROM product
WHERE product_qty_type = 'unit';


/*2. Using `INSERT`, add a new row to the product_units table (with an updated timestamp).
This can be any product you desire (e.g. add another record for Apple Pie). */


INSERT INTO product_units
SELECT
*,
CURRENT_TIMESTAMP
FROM product
WHERE product_name = 'Apple Pie' -- change if you want a different product
AND product_qty_type = 'unit'
LIMIT 1;

-- DELETE
/* 1. Delete the older record for the whatever product you added.

HINT: If you don't specify a WHERE clause, you are going to have a bad time.*/


DELETE FROM product_units
WHERE product_id = (
SELECT product_id
FROM product
WHERE product_name = 'Apple Pie'
LIMIT 1
)
AND snapshot_timestamp = (
SELECT MIN(snapshot_timestamp)
FROM product_units
WHERE product_id = (
SELECT product_id
FROM product
WHERE product_name = 'Apple Pie'
LIMIT 1
)
);

-- UPDATE
/* 1.We want to add the current_quantity to the product_units table.
First, add a new column, current_quantity to the table using the following syntax.

ALTER TABLE product_units
ADD COLUMN current_quantity INT;

Then, using UPDATE, set current_quantity equal to the last quantity value from the vendor_inventory details.

Finally, make sure you have a WHERE statement to update the right row,
you'll need to use product_units.product_id to refer to the correct row within the product_units table.
When you have all of these components, you can run the update statement. */




WITH ranked AS (
SELECT
product_id,
quantity,
market_date,
ROW_NUMBER() OVER (
PARTITION BY product_id
ORDER BY market_date DESC
) AS rn
FROM vendor_inventory
),
last_qty AS (
SELECT product_id, quantity
FROM ranked
WHERE rn = 1
)
UPDATE product_units
SET current_quantity = COALESCE(
(SELECT lq.quantity
FROM last_qty lq
WHERE lq.product_id = product_units.product_id),
0
);