12 changes: 10 additions & 2 deletions 02_activities/assignments/DC_Cohort/Assignment2.md
@@ -56,7 +56,8 @@ The store wants to keep customer addresses. Propose two architectures for the CU
**HINT:** search type 1 vs type 2 slowly changing dimensions.

```
Your answer...
A Type 1 address design overwrites the existing address every time a change is made, so only the current address is kept. No history is preserved.
A Type 2 address design keeps every address that was ever entered and tracks which one is current at a given time, so the full history is preserved.
```
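As an illustrative sketch of the Type 2 design (not part of the assignment's actual schema — the table and column names below are hypothetical), a history-keeping address table pairs each row with validity dates and a current flag. Python's built-in `sqlite3` is used here only to make the SQL runnable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer_address (
    customer_id INTEGER,
    address     TEXT,
    valid_from  TEXT,
    valid_to    TEXT,     -- NULL while the row is still current
    is_current  INTEGER   -- 1 = current address, 0 = historical
);
-- Customer 1 moves: the old row is closed out, a new current row is added.
INSERT INTO customer_address VALUES
    (1, '12 Elm St',  '2021-01-01', '2022-03-15', 0),
    (1, '98 Oak Ave', '2022-03-15', NULL,         1);
""")

# Type 2 keeps history; the current address is selected by flag.
current = conn.execute(
    "SELECT address FROM customer_address WHERE customer_id = 1 AND is_current = 1"
).fetchone()[0]
print(current)  # 98 Oak Ave
```

A Type 1 table would instead be a plain `customer_id`/`address` pair updated in place, losing the `12 Elm St` row entirely.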

***
@@ -191,5 +192,12 @@ Consider, for example, concepts of labour, bias, LLM proliferation, moderating c


```
Your thoughts...
Boykis’s article describes how neural networks are built on layers of human decisions, manual labor, and prior classification systems. Her central argument is that what appears to be automated intelligence is actually grounded in human labor and choices “all the way down,” from data labeling to taxonomy creation to category revision. This matters for health care quality research because studies increasingly rely on algorithmic tools to assess, predict, or improve care, often without fully accounting for the human judgments embedded in those systems.

Boykis explains that training datasets such as ImageNet were not simply produced by machines; they were assembled through massive human effort, including workers on Amazon Mechanical Turk manually identifying images (and likely being inadequately compensated). Even deeper than that, the labels used in ImageNet depended on earlier linguistic systems such as WordNet, which were themselves built through human categorization, interpretation, and manual compilation. This layered history suggests that machine learning systems inherit the assumptions and limitations of the people who define categories, collect examples, and decide what counts as correct. Applied to health care quality, this means that algorithmic measures of patient risk, diagnostic classification, or service quality may reflect prior human judgments rather than objective reality, and may fail to account for existing inequities.

The article also implies that classification itself is a quality issue. Boykis draws attention to the fact that categories are never natural or self-evident; they are socially constructed. Her discussion of WordNet synsets and ImageNet labels shows that every taxonomy depends on someone deciding what belongs together, what distinctions matter, and which definitions should be used. In health care quality research, this is highly relevant because quality measurement depends on similar classificatory decisions: what counts as a good outcome, which patient experiences are recorded, how adverse events are defined (and in many cases, the omission of near-misses, which are critical opportunities for learning), and whose perspectives are prioritized. If these categories are poorly designed or carry hidden assumptions, then the research built on them may reproduce distortions rather than reveal actual quality problems.

Overall, the article applies to health care quality research in that AI should be treated as a sociotechnical system rather than an autonomous machine. A major implication is that developing or repairing algorithmic tools in health care requires fully understanding and improving the human decisions, classifications, and data practices on which those tools depend.

```
Binary file not shown.
Binary file not shown.
147 changes: 134 additions & 13 deletions 02_activities/assignments/DC_Cohort/assignment2.sql
@@ -23,8 +23,11 @@ Edit the appropriate columns -- you're making two edits -- and the NULL rows wil
All the other rows will remain the same. */
--QUERY 1



SELECT
product_name || ', ' ||
COALESCE(product_size, '') || ' (' ||
COALESCE(product_qty_type, 'unit') || ')' AS product_description
FROM product;

--END QUERY

@@ -40,7 +43,15 @@ each new market date for each customer, or select only the unique market dates p
HINT: One of these approaches uses ROW_NUMBER() and one uses DENSE_RANK().
Filter the visits to dates before April 29, 2022. */
--QUERY 2

SELECT
customer_id,
market_date,
DENSE_RANK() OVER (
PARTITION BY customer_id
ORDER BY market_date
) AS visit_number
FROM customer_purchases
WHERE market_date < '2022-04-29';
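The ROW_NUMBER() vs DENSE_RANK() distinction the hint points at can be seen on a toy table (illustrative data only; SQLite 3.25+ is assumed for window-function support). When a customer makes two purchases on the same market date, ROW_NUMBER() keeps counting rows, while DENSE_RANK() treats them as one visit:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE purchases (customer_id INTEGER, market_date TEXT);
-- Customer 1 buys twice on the same visit, then once on a later date.
INSERT INTO purchases VALUES
    (1, '2022-04-02'), (1, '2022-04-02'), (1, '2022-04-09');
""")

rows = conn.execute("""
    SELECT market_date,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY market_date) AS rn,
           DENSE_RANK() OVER (PARTITION BY customer_id ORDER BY market_date) AS dr
    FROM purchases
    ORDER BY rn
""").fetchall()
# ROW_NUMBER numbers every purchase row; DENSE_RANK numbers distinct dates.
print(rows)  # [('2022-04-02', 1, 1), ('2022-04-02', 2, 1), ('2022-04-09', 3, 2)]
```

This is why DENSE_RANK() over raw purchase rows and ROW_NUMBER() over deduplicated dates both yield a per-customer visit number.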



@@ -52,8 +63,18 @@ then write another query that uses this one as a subquery (or temp table) and fi
only the customer’s most recent visit.
HINT: Do not use the previous visit dates filter. */
--QUERY 3


SELECT *
FROM (
SELECT
customer_id,
market_date,
DENSE_RANK() OVER (
PARTITION BY customer_id
ORDER BY market_date DESC
) AS visit_number
FROM customer_purchases
) t
WHERE visit_number = 1;


--END QUERY
@@ -62,10 +83,19 @@ HINT: Do not use the previous visit dates filter. */
/* 3. Using a COUNT() window function, include a value along with each row of the
customer_purchases table that indicates how many different times that customer has purchased that product_id.


You can make this a running count by including an ORDER BY within the PARTITION BY if desired.
Filter the visits to dates before April 29, 2022. */
--QUERY 4

SELECT
customer_id,
product_id,
market_date,
COUNT(*) OVER (
PARTITION BY customer_id, product_id
) AS purchase_count
FROM customer_purchases
WHERE market_date < '2022-04-29';



@@ -84,6 +114,15 @@ Remove any trailing or leading whitespaces. Don't just use a case statement for

Hint: you might need to use INSTR(product_name,'-') to find the hyphens. INSTR will help split the column. */
--QUERY 5
SELECT
product_name,
CASE
WHEN INSTR(product_name, '-') > 0
THEN TRIM(SUBSTR(product_name, INSTR(product_name, '-') + 1))
ELSE NULL
END AS description
FROM product;




@@ -93,7 +132,9 @@ Hint: you might need to use INSTR(product_name,'-') to find the hyphens. INSTR w

/* 2. Filter the query to show any product_size value that contain a number with REGEXP. */
--QUERY 6

SELECT *
FROM product
WHERE product_size REGEXP '[0-9]';
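One caveat worth noting: stock SQLite parses the REGEXP operator but ships no implementation behind it, so the query above errors unless a `regexp()` user function is registered. A minimal sketch using Python's `sqlite3` (the `product` table here is illustrative stand-in data, not the farmers-market database):

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
# `x REGEXP y` is rewritten by SQLite to regexp(y, x), so the
# registered function receives (pattern, value) in that order.
conn.create_function(
    "REGEXP", 2,
    lambda pattern, value: value is not None
    and re.search(pattern, value) is not None,
)

conn.executescript("""
CREATE TABLE product (product_name TEXT, product_size TEXT);
INSERT INTO product VALUES
    ('Eggs', '1 dozen'), ('Herbs', 'bunch'), ('Jam', '8 oz');
""")

sized = conn.execute(
    "SELECT product_name FROM product WHERE product_size REGEXP '[0-9]' "
    "ORDER BY product_name"
).fetchall()
print(sized)  # [('Eggs',), ('Jam',)]
```

Graphical tools such as DB Browser for SQLite typically register their own REGEXP, which is presumably why the assignment query works there as written.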



@@ -111,7 +152,30 @@ HINT: There are possibly a few ways to do this query, but if you're struggling
with a UNION binding them. */
--QUERY 7


WITH daily_sales AS (
SELECT
market_date,
SUM(quantity * cost_to_customer_per_qty) AS total_sales
FROM customer_purchases
GROUP BY market_date
),
ranked_days AS (
SELECT
market_date,
total_sales,
RANK() OVER (ORDER BY total_sales DESC) AS best_rank,
RANK() OVER (ORDER BY total_sales ASC) AS worst_rank
FROM daily_sales
)
SELECT market_date, total_sales, 'highest' AS day_type
FROM ranked_days
WHERE best_rank = 1

UNION

SELECT market_date, total_sales, 'lowest' AS day_type
FROM ranked_days
WHERE worst_rank = 1;


--END QUERY
@@ -131,7 +195,27 @@ Think a bit about the row counts: how many distinct vendors, product names are t
How many customers are there (y).
Before your final group by you should have the product of those two queries (x*y). */
--QUERY 8

SELECT
vp.vendor_name,
vp.product_name,
SUM(5 * vp.original_price) AS total_revenue
FROM (
SELECT DISTINCT
vi.vendor_id,
v.vendor_name,
vi.product_id,
p.product_name,
vi.original_price
FROM vendor_inventory vi
JOIN vendor v
ON vi.vendor_id = v.vendor_id
JOIN product p
ON vi.product_id = p.product_id
) vp
CROSS JOIN customer c
GROUP BY
vp.vendor_name,
vp.product_name;
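The x*y row-count reasoning in the hint can be checked on toy tables (illustrative names and prices only): with x = 2 distinct vendor/product rows and y = 3 customers, the CROSS JOIN yields 6 rows before the GROUP BY, and summing 5 units per customer multiplies each price by 5 * y:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE vendor_product (vendor_name TEXT, product_name TEXT, original_price REAL);
CREATE TABLE customer (customer_id INTEGER);
INSERT INTO vendor_product VALUES ('Annie', 'Jam', 4.0), ('Annie', 'Pie', 10.0);  -- x = 2
INSERT INTO customer VALUES (1), (2), (3);                                        -- y = 3
""")

# Before the GROUP BY, the CROSS JOIN produces x * y = 6 rows.
n = conn.execute(
    "SELECT COUNT(*) FROM vendor_product CROSS JOIN customer"
).fetchone()[0]
print(n)  # 6

# Each of the 3 customers buys 5 of each product: revenue = 5 * price * 3.
revenue = conn.execute("""
    SELECT product_name, SUM(5 * original_price)
    FROM vendor_product CROSS JOIN customer
    GROUP BY product_name
    ORDER BY product_name
""").fetchall()
print(revenue)  # [('Jam', 60.0), ('Pie', 150.0)]
```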



@@ -144,8 +228,14 @@ This table will contain only products where the `product_qty_type = 'unit'`.
It should use all of the columns from the product table, as well as a new column for the `CURRENT_TIMESTAMP`.
Name the timestamp column `snapshot_timestamp`. */
--QUERY 9
DROP TABLE IF EXISTS product_units;


CREATE TABLE product_units AS
SELECT
product.*,
CURRENT_TIMESTAMP AS snapshot_timestamp
FROM product
WHERE product_qty_type = 'unit';


--END QUERY
@@ -154,7 +244,22 @@ Name the timestamp column `snapshot_timestamp`. */
/*2. Using `INSERT`, add a new row to the product_units table (with an updated timestamp).
This can be any product you desire (e.g. add another record for Apple Pie). */
--QUERY 10

INSERT INTO product_units (
product_id,
product_name,
product_size,
product_category_id,
product_qty_type,
snapshot_timestamp
)
VALUES (
24,
'Smores Cookies',
'3',
6,
'unit',
CURRENT_TIMESTAMP
);



@@ -166,7 +271,13 @@ This can be any product you desire (e.g. add another record for Apple Pie). */

HINT: If you don't specify a WHERE clause, you are going to have a bad time.*/
--QUERY 11

DELETE FROM product_units
WHERE product_name = 'Smores Cookies'
AND snapshot_timestamp = (
SELECT MIN(snapshot_timestamp)
FROM product_units
WHERE product_name = 'Smores Cookies'
);



@@ -190,7 +301,17 @@ Finally, make sure you have a WHERE statement to update the right row,
you'll need to use product_units.product_id to refer to the correct row within the product_units table.
When you have all of these components, you can run the update statement. */
--QUERY 12

ALTER TABLE product_units
ADD COLUMN current_quantity INT;

UPDATE product_units
SET current_quantity = (
SELECT vi.quantity
FROM vendor_inventory vi
WHERE vi.product_id = product_units.product_id
ORDER BY vi.market_date DESC
LIMIT 1
);
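The correlated-subquery pattern in QUERY 12 can be sanity-checked on a toy schema; everything below is illustrative data, not the farmers-market database. The `ORDER BY ... DESC LIMIT 1` inside the subquery is what makes the most recent market date win:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product_units (product_id INTEGER, current_quantity INT);
CREATE TABLE vendor_inventory (product_id INTEGER, market_date TEXT, quantity INTEGER);
INSERT INTO product_units VALUES (7, NULL);
INSERT INTO vendor_inventory VALUES
    (7, '2022-04-01', 10),
    (7, '2022-04-08', 4);   -- most recent market date wins

-- Same shape as QUERY 12: take the quantity from the latest market_date.
UPDATE product_units
SET current_quantity = (
    SELECT vi.quantity
    FROM vendor_inventory vi
    WHERE vi.product_id = product_units.product_id
    ORDER BY vi.market_date DESC
    LIMIT 1
);
""")

qty = conn.execute(
    "SELECT current_quantity FROM product_units WHERE product_id = 7"
).fetchone()[0]
print(qty)  # 4
```

Note that a product with no vendor_inventory rows would be set to NULL by this pattern; wrapping the subquery in COALESCE(..., 0) would default it to zero if that were wanted.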



Binary file modified 05_src/sql/farmersmarket.db
Binary file not shown.