k11v/merch

Merch

Merch is an internal company store service where employees can purchase goods using coins. Each new employee is allocated 1000 coins, which can be used to buy merchandise. Additionally, coins can be transferred to other employees as a token of appreciation or as a gift.

Checklist

This service was implemented as part of a test assignment. Most of the items in the checklist below were completed within one week; the state of the work at the deadline is available under the 0.1.0 tag.

After submitting the assignment, there was both the desire and the time to refactor, write tests, and finish the checklist. The result of that work is available under the 0.2.1 tag.

  • Programming Language: Go.
  • Database: PostgreSQL.
  • Compliance with the given OpenAPI specification.
  • Authorization with JWT tokens.
  • Coverage with unit tests.
  • Coverage with E2E tests.
  • Load testing conducted.
  • Configured golangci-lint.
  • Configured Docker and Docker Compose.

Install and Run

Manually

  1. Set environment variables.

    You will need to set up a PostgreSQL server as an external dependency and place the connection string in the APP_POSTGRES_URL variable.

    The JWT verification and signature keys are an Ed25519 public and private key, respectively. If the specified key files do not exist, the setup program in the next step will generate them automatically.

    export APP_HOST="127.0.0.1"
    export APP_PORT="8080"
    export APP_POSTGRES_URL="postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable"
    export APP_JWT_VERIFICATION_KEY_FILE=".app/jwt.pub.pem"
    export APP_JWT_SIGNATURE_KEY_FILE=".app/jwt.pem"
    export APPTEST_USER_FILE=".app/apptest/user.json"
    export APPTEST_USER_COUNT="10000"
    export APPTEST_AUTH_TOKEN_FILE=".app/apptest/auth_token.json"
  2. Set up the server environment.

    The setup program migrates the database and, if necessary, generates the JWT verification and signature keys. It is idempotent.

    go run ./cmd/setup -app
  3. Start the server.

    The service will be available at http://127.0.0.1:8080.

    go run ./cmd/server

Docker Compose

  1. Start the server and its dependencies.

    During startup, the setup program will also migrate the database and generate the JWT verification and signature keys.

    The service will be available at http://127.0.0.1:8080.

    docker compose up -d

Running E2E Tests

  1. If necessary, stop the server with its dependencies and delete all data.

    E2E tests are not idempotent, so there should be no old data before retesting.

    docker compose down -v
  2. Start the server and its dependencies.

    docker compose up -d
  3. Run the E2E tests.

    APPTEST_URL allows you to specify the address of the service to be tested.

    APPTEST_E2E disables the automatic skipping of E2E tests.

    -count=1 disables test result caching; otherwise Go would skip rerunning the E2E tests because the source code has not changed.

    export APPTEST_URL="http://127.0.0.1:8080"
    export APPTEST_E2E=1
    go test -count=1 -v ./tests/e2e/...

Running Load Tests

  1. If necessary, stop the server with its dependencies and delete all data.

    Load tests are not idempotent, so there should be no old data before retesting.

    docker compose --profile test down -v
  2. Start the server and its dependencies with the test profile.

    The test profile will additionally run the setup -apptest command, which will populate the database with test data and create files with test users and authentication tokens.

    Authentication tokens have the usual lifespan (1 hour), so run the load test soon after they are generated.

    docker compose --profile test up -d
  3. Copy the files with test users and authentication tokens.

    mkdir -p .app/apptest
    docker compose cp server:/user/app/apptest/user.json .app/apptest
    docker compose cp server:/user/app/apptest/auth_token.json .app/apptest
  4. Run the load testing specifying the paths to the copied files.

    The file paths must be absolute.

    APPTEST_URL allows you to specify the address of the service to be tested.

    export APPTEST_URL="http://127.0.0.1:8080"
    export APPTEST_USER_FILE="$PWD/.app/apptest/user.json"
    export APPTEST_AUTH_TOKEN_FILE="$PWD/.app/apptest/auth_token.json"
    k6 run ./tests/load/server.js

    The obtained results can be compared with those below, recorded earlier on modest hardware.

             /\      Grafana   /‾‾/
        /\  /  \     |\  __   /  /
       /  \/    \    | |/ /  /   ‾‾\
      /          \   |   (  |  (‾)  |
     / __________ \  |_|\_\  \_____/
    
         execution: local
            script: ./tests/load/server.js
            output: -
    
         scenarios: (100.00%) 1 scenario, 30 max VUs, 5m30s max duration (incl. graceful stop):
                  * default: Up to 30 looping VUs for 5m0s over 4 stages (gracefulRampDown: 30s, gracefulStop: 30s)
    
    
         ✓ 200 or 400 and not enough coin
         ✓ 200
    
         checks.........................: 100.00% 116469 out of 116469
         data_received..................: 23 MB   76 kB/s
         data_sent......................: 48 MB   159 kB/s
         http_req_blocked...............: avg=13.5µs  min=2µs     med=8µs     max=9.58ms   p(90)=26µs    p(95)=36µs
         http_req_connecting............: avg=290ns   min=0s      med=0s      max=7.85ms   p(90)=0s      p(95)=0s
       ✓ http_req_duration..............: avg=20.7ms  min=2.47ms  med=14.94ms max=442.86ms p(90)=43.48ms p(95)=50.82ms
           { expected_response:true }...: avg=20.7ms  min=2.47ms  med=14.94ms max=442.86ms p(90)=43.48ms p(95)=50.82ms
       ✓ http_req_failed................: 0.00%   0 out of 116469
         http_req_receiving.............: avg=97.08µs min=15µs    med=74µs    max=19.84ms  p(90)=144µs   p(95)=240µs
         http_req_sending...............: avg=68.45µs min=8µs     med=28µs    max=102.92ms p(90)=103µs   p(95)=180µs
         http_req_tls_handshaking.......: avg=0s      min=0s      med=0s      max=0s       p(90)=0s      p(95)=0s
         http_req_waiting...............: avg=20.53ms min=2.39ms  med=14.78ms max=442.8ms  p(90)=43.3ms  p(95)=50.63ms
         http_reqs......................: 116469  388.179964/s
         iteration_duration.............: avg=31.89ms min=12.89ms med=26.18ms max=453.26ms p(90)=54.72ms p(95)=62.06ms
         iterations.....................: 116469  388.179964/s
         vus............................: 19      min=0                max=30
         vus_max........................: 30      min=30               max=30
    
    
    running (5m00.0s), 00/30 VUs, 116469 complete and 0 interrupted iterations
    default ✓ [======================================] 00/30 VUs  5m0s
    

Decisions

Implementation of the HTTP server using oapi-codegen

oapi-codegen is a command-line tool and library that converts OpenAPI specifications into Go code, for servers as well as clients.

This tool was chosen because a ready-made OpenAPI specification was provided in the task, which could not be changed, and for which an HTTP server had to be implemented. oapi-codegen sped up the development of the server's presentation layer and allowed the server implementation to closely align with the given specification.

Password hashing using Argon2id

Argon2id is a variant of the Argon2 algorithm, winner of the 2015 Password Hashing Competition. It is designed for password hashing and combines resistance to GPU cracking attacks with resistance to side-channel attacks.

This algorithm was chosen based on the OWASP recommendation. Password hashing was implemented with only one external dependency on golang.org/x/crypto/argon2.

Using UUID for primary keys in the database

UUIDs as primary keys have several advantages: they never need to change, they can be generated by the server as well as the client, and rows from multiple tables can be merged without key collisions. This gives more flexibility than alternatives such as a serial id column or a username text column.

Organizing Packages by Domains

During development, most of the code was located in cmd/server/main.go to maintain flexibility and speed. Premature abstractions, such as a service layer and repository layer, were avoided to first implement the core functionality and write code that was most likely to remain in the project. Where abstractions could be beneficial, they were created.

Closer to the deadline, the main.go file was split into several files to make navigation and support easier. There was a desire, but not enough time, to move the service layer into separate packages.

After the deadline, several days were spent organizing the code and bringing it to the desired form. After refactoring, the HTTP handlers remained in the cmd/server package. They are responsible for receiving requests from the end user, passing them to the service layer with business logic, and forming responses for the end user.

The service layer is organized into multiple packages grouped by domain (per tactical DDD). More specific domain packages are allowed to depend on more general ones, inspired by the Go standard library, where, for example, the net/http package depends on the net package.

It is worth noting that the internal/app package is not designed to depend on other packages. It is intended for any types, interfaces, and functions common to the entire service. When it is necessary to combine several domain packages, for example, to create an HTTP server, a separate package should be used. In this project, this responsibility lies with the cmd/server and cmd/setup packages.

About

Merch was created as a test assignment in February 2025.
