Procman E2E Test Cases

This document provides comprehensive end-to-end test cases for the procman daemon capabilities, including both manual test steps and automated test scripts.

Test Environment Setup

Prerequisites

  • Go 1.23 or later installed
  • curl or wget for HTTP testing
  • netstat or ss for port checking
  • ps or tasklist for process checking

Test Configuration

# Set up test environment
export PROCMAN_TEST_MODE=1
export PROCMAN_DATA_DIR=/tmp/procman-test
export PROCMAN_PID_FILE=/tmp/procman-test/procman.pid
export PROCMAN_LOG_FILE=/tmp/procman-test/procman.log
export PROCMAN_PORT=18080

# Create test directories
mkdir -p /tmp/procman-test/{data,logs,sessions}

1. Daemon Lifecycle Tests

1.1 Start Daemon Tests

Test 1.1.1: Basic Daemon Start

Objective: Verify daemon starts successfully with default configuration

Manual Steps:

  1. Build the procman binary: go build -o procman ./cmd/procman
  2. Start daemon: ./procman start
  3. Verify daemon is running: ./procman status
  4. Check PID file exists: cat /tmp/procman-test/procman.pid
  5. Verify process is running: ps aux | grep procman
  6. Stop daemon: ./procman stop

Expected Results:

  • Daemon starts without errors
  • Status shows "Daemon is running with PID <pid>"
  • PID file contains valid process ID
  • Process is running in background
  • Daemon stops cleanly

Test 1.1.2: Daemon Start with Custom Port

Objective: Verify daemon starts with custom port configuration

Manual Steps:

  1. Start daemon with custom port: ./procman -port 19090 start
  2. Check status: ./procman status
  3. Verify port is listening: netstat -tlnp | grep 19090
  4. Test HTTP endpoint: curl http://localhost:19090/
  5. Stop daemon: ./procman stop

Expected Results:

  • Daemon starts on port 19090
  • Port 19090 is listening
  • HTTP endpoint responds correctly

Test 1.1.3: Daemon Start with Custom PID File

Objective: Verify daemon starts with custom PID file location

Manual Steps:

  1. Start daemon with custom PID file: ./procman -pid-file /tmp/custom-procman.pid start
  2. Verify PID file exists: cat /tmp/custom-procman.pid
  3. Check status using custom PID file: ./procman -pid-file /tmp/custom-procman.pid status
  4. Stop daemon: ./procman -pid-file /tmp/custom-procman.pid stop

Expected Results:

  • Daemon creates PID file at custom location
  • Status command works with custom PID file
  • Daemon stops cleanly

1.2 Stop Daemon Tests

Test 1.2.1: Normal Daemon Stop

Objective: Verify daemon stops cleanly

Manual Steps:

  1. Start daemon: ./procman start
  2. Wait 2 seconds
  3. Stop daemon: ./procman stop
  4. Verify status: ./procman status
  5. Check PID file: ls -la /tmp/procman-test/procman.pid

Expected Results:

  • Daemon stops without errors
  • Status shows "Daemon is not running"
  • PID file is removed

Test 1.2.2: Stop Non-Running Daemon

Objective: Verify graceful handling when stopping non-running daemon

Manual Steps:

  1. Ensure daemon is not running: ./procman stop (if running)
  2. Try to stop daemon: ./procman stop
  3. Check the output

Expected Results:

  • Error message indicating daemon is not running
  • No crash or panic

1.3 Status Command Tests

Test 1.3.1: Status of Running Daemon

Objective: Verify status command correctly identifies running daemon

Manual Steps:

  1. Start daemon: ./procman start
  2. Check status: ./procman status
  3. Verify PID is correct: cat /tmp/procman-test/procman.pid
  4. Stop daemon

Expected Results:

  • Status shows "Daemon is running with PID <pid>"
  • PID matches actual process

Test 1.3.2: Status of Stopped Daemon

Objective: Verify status command correctly identifies stopped daemon

Manual Steps:

  1. Ensure daemon is not running
  2. Check status: ./procman status

Expected Results:

  • Status shows "Daemon is not running"

Test 1.3.3: Status with Missing PID File

Objective: Verify status command handles missing PID file

Manual Steps:

  1. Remove PID file: rm -f /tmp/procman-test/procman.pid
  2. Check status: ./procman status

Expected Results:

  • Status shows "Daemon is not running"
  • No error or crash

1.4 Run Command Tests

Test 1.4.1: Run in Foreground

Objective: Verify foreground mode works correctly

Manual Steps:

  1. Start in foreground: ./procman run
  2. In another terminal, test HTTP endpoint: curl http://localhost:8080/
  3. Send SIGTERM (Ctrl+C) to stop

Expected Results:

  • Daemon runs in foreground
  • HTTP server responds
  • Clean shutdown on SIGTERM

2. HTTP API Endpoint Tests

2.1 Root Endpoint Tests

Test 2.1.1: GET / - Basic Response

Objective: Verify root endpoint responds correctly

Manual Steps:

  1. Start daemon: ./procman start
  2. Test root endpoint: curl -v http://localhost:8080/
  3. Check response headers and body
  4. Stop daemon

Expected Results:

  • HTTP 200 OK
  • Content-Type: text/plain; charset=utf-8
  • Body: "Hello World"

Test 2.1.2: GET / - Concurrent Requests

Objective: Verify endpoint handles concurrent requests

Manual Steps:

  1. Start daemon: ./procman start
  2. Send multiple concurrent requests:
    for i in {1..10}; do
      curl -s http://localhost:8080/ &
    done
    wait
  3. Stop daemon

Expected Results:

  • All requests respond correctly
  • No connection refused errors

2.2 Health Endpoint Tests

Test 2.2.1: GET /health - Basic Response

Objective: Verify health endpoint responds correctly

Manual Steps:

  1. Start daemon: ./procman start
  2. Test health endpoint: curl -v http://localhost:8080/health
  3. Check response headers and body
  4. Stop daemon

Expected Results:

  • HTTP 200 OK
  • Content-Type: application/json
  • Body: {"status": "healthy"}

Test 2.2.2: GET /health - JSON Validation

Objective: Verify health endpoint returns valid JSON

Manual Steps:

  1. Start daemon: ./procman start
  2. Test health endpoint: curl -s http://localhost:8080/health | python3 -m json.tool
  3. Stop daemon

Expected Results:

  • JSON is valid and properly formatted
  • Contains "status" field with value "healthy"
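The pretty-print check above only verifies that the response is syntactically valid JSON. A stricter assertion on the payload can be scripted with python3 (already used in the steps); `assert_healthy` is our helper name, not part of procman:

```shell
# Fails (nonzero exit) unless stdin is JSON containing {"status": "healthy"}.
assert_healthy() {
  python3 -c 'import json, sys; d = json.load(sys.stdin); assert d.get("status") == "healthy", d'
}

# Against a running daemon:
# curl -s http://localhost:8080/health | assert_healthy && echo "health OK"
```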

2.3 Invalid Endpoint Tests

Test 2.3.1: GET /invalid - 404 Response

Objective: Verify invalid endpoints return 404

Manual Steps:

  1. Start daemon: ./procman start
  2. Test invalid endpoint: curl -v http://localhost:8080/invalid
  3. Stop daemon

Expected Results:

  • HTTP 404 Not Found
  • Error message in body

3. Configuration Validation Tests

3.1 Environment Variable Tests

Test 3.1.1: PROCMAN_PORT Override

Objective: Verify environment variable overrides default port

Manual Steps:

  1. Set environment variable: export PROCMAN_PORT=19090
  2. Start daemon: ./procman start
  3. Test endpoint: curl http://localhost:19090/
  4. Stop daemon
  5. Unset environment variable

Expected Results:

  • Daemon starts on port 19090
  • HTTP server responds on correct port

Test 3.1.2: PROCMAN_PID_FILE Override

Objective: Verify environment variable overrides default PID file

Manual Steps:

  1. Set environment variable: export PROCMAN_PID_FILE=/tmp/env-procman.pid
  2. Start daemon: ./procman start
  3. Check PID file: cat /tmp/env-procman.pid
  4. Stop daemon
  5. Unset environment variable

Expected Results:

  • PID file created at specified location
  • Daemon uses custom PID file path

Test 3.1.3: PROCMAN_LOG_FILE Override

Objective: Verify environment variable overrides default log file

Manual Steps:

  1. Set environment variable: export PROCMAN_LOG_FILE=/tmp/env-procman.log
  2. Start daemon: ./procman start
  3. Check log file: ls -la /tmp/env-procman.log
  4. Stop daemon
  5. Check log contents: cat /tmp/env-procman.log
  6. Unset environment variable

Expected Results:

  • Log file created at specified location
  • Log messages written to file
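Checking the log right after startup can race the daemon's first write. A small polling helper avoids that; `wait_for_log_line` is our name, and the pattern you grep for depends on procman's actual log format, which this sketch does not assume:

```shell
# Polls up to $3 seconds (default 10) for a line matching $2 in log file $1.
wait_for_log_line() {
  local file=$1 pattern=$2 timeout=${3:-10}
  for _ in $(seq "$timeout"); do
    if [ -f "$file" ] && grep -q "$pattern" "$file"; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# wait_for_log_line /tmp/env-procman.log "listening" 10
```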

3.2 Command Line Override Tests

Test 3.2.1: Command Line Port Override

Objective: Verify command line flags override environment variables

Manual Steps:

  1. Set environment variable: export PROCMAN_PORT=19090
  2. Start with different port: ./procman -port 20000 start
  3. Test endpoint: curl http://localhost:20000/
  4. Stop daemon
  5. Unset environment variable

Expected Results:

  • Daemon starts on port 20000 (command line override)
  • Port 19090 is not used

3.3 Invalid Configuration Tests

Test 3.3.1: Invalid Port Number

Objective: Verify graceful handling of invalid port

Manual Steps:

  1. Try to start with invalid port: ./procman -port 99999 start
  2. Check error message

Expected Results:

  • Error message about invalid port
  • Daemon does not start

Test 3.3.2: Invalid PID File Path

Objective: Verify graceful handling of invalid PID file path

Manual Steps:

  1. Try to start with invalid PID file: ./procman -pid-file /invalid/path/procman.pid start
  2. Check error message

Expected Results:

  • Error message about invalid path
  • Daemon does not start

4. Error Handling Tests

4.1 Port Conflict Tests

Test 4.1.1: Port Already in Use

Objective: Verify graceful handling when port is already in use

Manual Steps:

  1. Start a simple HTTP server on port 8080:
    python3 -m http.server 8080 &
    PYTHON_PID=$!
  2. Try to start procman: ./procman start
  3. Check error message
  4. Stop Python server: kill $PYTHON_PID

Expected Results:

  • Error message about port being in use
  • Daemon does not start
  • No crash or panic

4.2 Permission Tests

Test 4.2.1: Invalid PID File Permissions

Objective: Verify graceful handling of permission issues

Manual Steps:

  1. Create directory with restricted permissions:
    sudo mkdir -p /root/procman-test
    sudo chmod 700 /root/procman-test
  2. Try to start with restricted PID file: ./procman -pid-file /root/procman-test/procman.pid start
  3. Check error message

Expected Results:

  • Error message about permission denied
  • Daemon does not start

4.3 Signal Handling Tests

Test 4.3.1: SIGTERM Handling

Objective: Verify graceful shutdown on SIGTERM

Manual Steps:

  1. Start daemon: ./procman start
  2. Get PID: cat /tmp/procman-test/procman.pid
  3. Send SIGTERM: kill -TERM <pid>
  4. Check status: ./procman status
  5. Check log file for shutdown message

Expected Results:

  • Daemon shuts down gracefully
  • PID file is removed
  • Log contains shutdown message
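Steps 3-4 can be automated with a small polling helper (our own name, not a procman feature) that confirms the process actually exits within a deadline rather than checking only once:

```shell
# Polls up to $2 seconds (default 10) for process $1 to exit.
# kill -0 delivers no signal; it only tests whether the PID is still alive.
wait_for_exit() {
  local pid=$1 timeout=${2:-10}
  for _ in $(seq "$timeout"); do
    kill -0 "$pid" 2>/dev/null || return 0
    sleep 1
  done
  return 1
}

# pid=$(cat /tmp/procman-test/procman.pid)
# kill -TERM "$pid"
# wait_for_exit "$pid" && echo "graceful shutdown confirmed"
```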

Test 4.3.2: SIGINT Handling

Objective: Verify graceful shutdown on SIGINT

Manual Steps:

  1. Start daemon: ./procman start
  2. Get PID: cat /tmp/procman-test/procman.pid
  3. Send SIGINT: kill -INT <pid>
  4. Check status: ./procman status

Expected Results:

  • Daemon shuts down gracefully
  • PID file is removed

5. Cross-Platform Compatibility Tests

5.1 Linux-Specific Tests

Test 5.1.1: Process Forking

Objective: Verify daemon forking works correctly on Linux

Manual Steps:

  1. Start daemon: ./procman start
  2. Check parent process: ps aux | grep procman
  3. Verify child process is running
  4. Stop daemon

Expected Results:

  • Parent process exits
  • Child process continues running
  • Proper daemonization

Test 5.1.2: Signal Handling

Objective: Verify Unix signal handling works correctly

Manual Steps:

  1. Start daemon: ./procman start
  2. Get PID: cat /tmp/procman-test/procman.pid
  3. Send various signals:
    kill -HUP <pid>   # SIGHUP
    kill -USR1 <pid>  # SIGUSR1
  4. Check daemon status: ./procman status
  5. Stop daemon: ./procman stop

Expected Results:

  • Daemon handles signals gracefully
  • No unexpected termination

5.2 Windows-Specific Tests

Test 5.2.1: Windows Service Compatibility

Objective: Verify basic Windows compatibility

Manual Steps:

  1. Build for Windows: GOOS=windows GOARCH=amd64 go build -o procman.exe ./cmd/procman
  2. Run basic commands:
    ./procman.exe --help
    ./procman.exe run
  3. Test in Windows environment

Expected Results:

  • Binary builds successfully
  • Basic commands work on Windows
  • No platform-specific crashes

5.3 macOS-Specific Tests

Test 5.3.1: macOS Daemon Compatibility

Objective: Verify macOS compatibility

Manual Steps:

  1. Build for macOS: GOOS=darwin GOARCH=amd64 go build -o procman-macos ./cmd/procman
  2. Test basic functionality on macOS
  3. Verify signal handling

Expected Results:

  • Binary builds successfully
  • Daemon works correctly on macOS
  • Proper signal handling
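The per-platform build commands in sections 5.1-5.3 can be folded into one loop. `build_all` is a convenience sketch; it assumes the package layout used throughout this document (`./cmd/procman`):

```shell
# Cross-compiles procman for each GOOS/GOARCH pair; stops on first failure.
build_all() {
  local pkg=${1:-./cmd/procman} target
  for target in linux/amd64 darwin/amd64 windows/amd64; do
    GOOS=${target%/*} GOARCH=${target#*/} \
      go build -o "procman-${target%/*}" "$pkg" || return 1
  done
}

# build_all   # produces procman-linux, procman-darwin, procman-windows
```

On the Windows target you may prefer an `.exe` suffix for the output name, as in the manual steps above.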

6. Performance and Stress Tests

6.1 Concurrent Request Tests

Test 6.1.1: High Concurrency

Objective: Verify daemon handles high concurrent load

Manual Steps:

  1. Start daemon: ./procman start
  2. Send 1000 concurrent requests:
    for i in {1..1000}; do
      curl -s http://localhost:8080/ &
    done
    wait
  3. Stop daemon

Expected Results:

  • All requests complete successfully
  • No daemon crashes
  • Reasonable response times
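For load beyond a plain for-loop, xargs can keep a fixed number of requests in flight while a small filter tallies status codes. `tally_codes` is our helper; dedicated load tools such as ab or hey are reasonable alternatives:

```shell
# Reads one HTTP status code per line, prints a count per code, and
# exits nonzero if any code other than 200 appeared.
tally_codes() {
  sort | uniq -c | awk '{ print } $2 != 200 { bad = 1 } END { exit bad }'
}

# 1000 requests, at most 50 concurrent, against the running daemon:
# seq 1000 | xargs -P 50 -I{} \
#   curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/ | tally_codes
```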

6.2 Long-Running Tests

Test 6.2.1: 24-Hour Stability

Objective: Verify daemon stability over long periods

Manual Steps:

  1. Start daemon: ./procman start
  2. Let it run for 24 hours
  3. Periodically check status: ./procman status
  4. Periodically test HTTP endpoint: curl http://localhost:8080/health
  5. After 24 hours, stop daemon: ./procman stop

Expected Results:

  • Daemon remains stable
  • No memory leaks
  • Consistent HTTP responses
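A day-long soak does not need a person at the terminal. A loop like the one below (illustrative; `monitor` is our own name) logs a timestamped probe result at a fixed interval, which also gives you a record for post-hoc analysis:

```shell
# Probes $1 every $3 seconds for $2 seconds total, logging one line per probe.
# curl -sf treats HTTP errors (4xx/5xx) and connection failures alike.
monitor() {
  local url=$1 duration=${2:-86400} interval=${3:-60}
  local end=$(( $(date +%s) + duration ))
  while [ "$(date +%s)" -lt "$end" ]; do
    if curl -sf "$url" > /dev/null; then
      echo "$(date -u +%FT%TZ) OK"
    else
      echo "$(date -u +%FT%TZ) FAIL"
    fi
    sleep "$interval"
  done
}

# monitor http://localhost:8080/health 86400 60 >> /tmp/procman-test/uptime.log
```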

7. Security Tests

7.1 Input Validation Tests

Test 7.1.1: Malformed HTTP Requests

Objective: Verify daemon handles malformed HTTP requests

Manual Steps:

  1. Start daemon: ./procman start
  2. Send malformed requests:
    echo "INVALID HTTP" | nc -w 2 localhost 8080
    curl -X INVALID http://localhost:8080/
  3. Stop daemon

Expected Results:

  • Daemon handles malformed requests gracefully
  • No crashes or panics
  • Appropriate error responses

7.2 Resource Limit Tests

Test 7.2.1: Memory Usage

Objective: Verify daemon memory usage is reasonable

Manual Steps:

  1. Start daemon: ./procman start
  2. Monitor memory usage: ps aux | grep procman
  3. Send many requests and monitor memory
  4. Stop daemon

Expected Results:

  • Memory usage is reasonable
  • No memory leaks detected

Automated Test Scripts

test-e2e.sh

#!/bin/bash

# Comprehensive E2E test script for procman daemon

set -e

# Configuration
TEST_PORT=18080
TEST_PID_FILE="/tmp/procman-test/procman.pid"
TEST_LOG_FILE="/tmp/procman-test/procman.log"
PROCMAN_BIN="./procman"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Test counter
TOTAL_TESTS=0
PASSED_TESTS=0
FAILED_TESTS=0

# Setup
setup() {
    echo -e "${YELLOW}Setting up test environment...${NC}"
    
    # Build procman
    go build -o "$PROCMAN_BIN" ./cmd/procman
    
    # Ensure daemon is stopped and stale state is gone
    # (cleanup must run *before* the test directories are created)
    stop_daemon || true
    cleanup_test_files
    
    # Create test directories
    mkdir -p /tmp/procman-test/{data,logs,sessions}
    
    # Set environment variables
    export PROCMAN_PORT=$TEST_PORT
    export PROCMAN_PID_FILE="$TEST_PID_FILE"
    export PROCMAN_LOG_FILE="$TEST_LOG_FILE"
    export PROCMAN_DATA_DIR="/tmp/procman-test/data"
}

# Cleanup
cleanup() {
    echo -e "${YELLOW}Cleaning up...${NC}"
    stop_daemon || true
    cleanup_test_files
    rm -f "$PROCMAN_BIN"
}

# Cleanup test files
cleanup_test_files() {
    rm -f "$TEST_PID_FILE" "$TEST_LOG_FILE"
    rm -rf /tmp/procman-test
}

# Helper functions
start_daemon() {
    echo "Starting daemon..."
    "$PROCMAN_BIN" start > /dev/null 2>&1
    sleep 2  # Wait for daemon to start
}

stop_daemon() {
    echo "Stopping daemon..."
    "$PROCMAN_BIN" stop > /dev/null 2>&1 || true
    sleep 1
}

daemon_status() {
    "$PROCMAN_BIN" status 2>/dev/null || echo "Daemon is not running"
}

check_http_endpoint() {
    local url=$1
    local expected_status=$2
    local expected_body=$3
    
    local response=$(curl -s -w "%{http_code}" "$url")
    local status_code=${response: -3}
    local body=${response%???}
    
    if [[ "$status_code" == "$expected_status" && "$body" == *"$expected_body"* ]]; then
        return 0
    else
        return 1
    fi
}

wait_for_port() {
    local port=$1
    local timeout=$2
    
    for i in $(seq 1 "$timeout"); do
        if netstat -tlnp 2>/dev/null | grep -q ":$port "; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Test runner
run_test() {
    local test_name=$1
    local test_func=$2
    
    TOTAL_TESTS=$((TOTAL_TESTS + 1))
    echo -e "\n${YELLOW}Running test: $test_name${NC}"
    
    if "$test_func"; then
        echo -e "${GREEN}✓ Test passed: $test_name${NC}"
        PASSED_TESTS=$((PASSED_TESTS + 1))
        return 0
    else
        echo -e "${RED}✗ Test failed: $test_name${NC}"
        FAILED_TESTS=$((FAILED_TESTS + 1))
        return 1
    fi
}

# Test functions
test_daemon_start() {
    start_daemon
    
    if [[ ! -f "$TEST_PID_FILE" ]]; then
        echo "PID file not created"
        return 1
    fi
    
    if ! daemon_status | grep -q "running"; then
        echo "Daemon not running"
        return 1
    fi
    
    if ! wait_for_port $TEST_PORT 10; then
        echo "Port $TEST_PORT not listening"
        return 1
    fi
    
    stop_daemon
    return 0
}

test_daemon_stop() {
    start_daemon
    
    if ! daemon_status | grep -q "running"; then
        echo "Daemon not running after start"
        return 1
    fi
    
    stop_daemon
    
    if daemon_status | grep -q "running"; then
        echo "Daemon still running after stop"
        return 1
    fi
    
    if [[ -f "$TEST_PID_FILE" ]]; then
        echo "PID file not removed"
        return 1
    fi
    
    return 0
}

test_daemon_status() {
    # Test status when not running
    if daemon_status | grep -q "running"; then
        echo "Daemon should not be running"
        return 1
    fi
    
    # Test status when running
    start_daemon
    
    if ! daemon_status | grep -q "running"; then
        echo "Daemon should be running"
        return 1
    fi
    
    stop_daemon
    return 0
}

test_http_root_endpoint() {
    start_daemon
    
    if ! check_http_endpoint "http://localhost:$TEST_PORT/" "200" "Hello World"; then
        echo "Root endpoint not working"
        return 1
    fi
    
    stop_daemon
    return 0
}

test_http_health_endpoint() {
    start_daemon
    
    if ! check_http_endpoint "http://localhost:$TEST_PORT/health" "200" "healthy"; then
        echo "Health endpoint not working"
        return 1
    fi
    
    stop_daemon
    return 0
}

test_http_404_endpoint() {
    start_daemon
    
    local response=$(curl -s -w "%{http_code}" "http://localhost:$TEST_PORT/invalid")
    local status_code=${response: -3}
    
    if [[ "$status_code" != "404" ]]; then
        echo "404 endpoint not working"
        return 1
    fi
    
    stop_daemon
    return 0
}

test_custom_port() {
    local custom_port=19090
    
    "$PROCMAN_BIN" -port $custom_port start > /dev/null 2>&1
    sleep 2
    
    if ! wait_for_port $custom_port 10; then
        echo "Custom port not listening"
        stop_daemon
        return 1
    fi
    
    if ! check_http_endpoint "http://localhost:$custom_port/" "200" "Hello World"; then
        echo "Custom port endpoint not working"
        stop_daemon
        return 1
    fi
    
    stop_daemon
    return 0
}

test_environment_variables() {
    local env_port=20000
    export PROCMAN_PORT=$env_port
    
    start_daemon
    
    if ! wait_for_port $env_port 10; then
        echo "Environment variable port not working"
        stop_daemon
        return 1
    fi
    
    if ! check_http_endpoint "http://localhost:$env_port/" "200" "Hello World"; then
        echo "Environment variable endpoint not working"
        stop_daemon
        return 1
    fi
    
    stop_daemon
    unset PROCMAN_PORT
    return 0
}

test_concurrent_requests() {
    start_daemon
    
    # Send 50 requests concurrently and count successes
    local results_file=/tmp/procman-test/concurrent_results.txt
    : > "$results_file"
    for i in {1..50}; do
        {
            if check_http_endpoint "http://localhost:$TEST_PORT/" "200" "Hello World"; then
                echo ok >> "$results_file"
            fi
        } &
    done
    wait
    
    stop_daemon
    
    local success_count=$(wc -l < "$results_file")
    if [[ $success_count -ne 50 ]]; then
        echo "Only $success_count/50 concurrent requests succeeded"
        return 1
    fi
    
    return 0
}

test_foreground_mode() {
    # Test foreground mode with timeout
    timeout 5s "$PROCMAN_BIN" run > /tmp/procman-run.log 2>&1 &
    local run_pid=$!
    
    sleep 2
    
    if ! wait_for_port $TEST_PORT 10; then
        echo "Foreground mode port not listening"
        kill $run_pid 2>/dev/null || true
        return 1
    fi
    
    if ! check_http_endpoint "http://localhost:$TEST_PORT/" "200" "Hello World"; then
        echo "Foreground mode endpoint not working"
        kill $run_pid 2>/dev/null || true
        return 1
    fi
    
    # Wait for timeout to kill the process
    wait $run_pid 2>/dev/null || true
    
    return 0
}

test_invalid_configurations() {
    # Test invalid port
    if "$PROCMAN_BIN" -port 99999 start 2>/dev/null; then
        echo "Invalid port should fail"
        return 1
    fi
    
    # Test invalid PID file path
    if "$PROCMAN_BIN" -pid-file "/invalid/path/procman.pid" start 2>/dev/null; then
        echo "Invalid PID file path should fail"
        return 1
    fi
    
    return 0
}

test_port_conflict() {
    # Start a simple server on the test port
    python3 -m http.server $TEST_PORT > /dev/null 2>&1 &
    local python_pid=$!
    sleep 1
    
    # Try to start procman on the same port
    if "$PROCMAN_BIN" start 2>/dev/null; then
        echo "Port conflict should fail"
        kill $python_pid 2>/dev/null || true
        return 1
    fi
    
    kill $python_pid 2>/dev/null || true
    return 0
}

# Main test execution
main() {
    echo -e "${YELLOW}Starting Procman E2E Tests${NC}"
    echo "================================"
    
    setup
    
    # Run all tests
    run_test "Daemon Start" test_daemon_start
    run_test "Daemon Stop" test_daemon_stop
    run_test "Daemon Status" test_daemon_status
    run_test "HTTP Root Endpoint" test_http_root_endpoint
    run_test "HTTP Health Endpoint" test_http_health_endpoint
    run_test "HTTP 404 Endpoint" test_http_404_endpoint
    run_test "Custom Port" test_custom_port
    run_test "Environment Variables" test_environment_variables
    run_test "Concurrent Requests" test_concurrent_requests
    run_test "Foreground Mode" test_foreground_mode
    run_test "Invalid Configurations" test_invalid_configurations
    run_test "Port Conflict" test_port_conflict
    
    # Print results
    echo -e "\n${YELLOW}Test Results${NC}"
    echo "================================"
    echo -e "Total tests: ${TOTAL_TESTS}"
    echo -e "${GREEN}Passed: ${PASSED_TESTS}${NC}"
    echo -e "${RED}Failed: ${FAILED_TESTS}${NC}"
    
    if [[ $FAILED_TESTS -eq 0 ]]; then
        echo -e "\n${GREEN}All tests passed! 🎉${NC}"
        exit 0
    else
        echo -e "\n${RED}Some tests failed. 😞${NC}"
        exit 1
    fi
}

# Trap cleanup on exit
trap cleanup EXIT

# Run main function
main "$@"

test-stress.sh

#!/bin/bash

# Stress test script for procman daemon

set -e

# Configuration
TEST_PORT=18080
PROCMAN_BIN="./procman"
TEST_DURATION=300  # 5 minutes

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m'

# Setup
setup() {
    echo -e "${YELLOW}Setting up stress test...${NC}"
    go build -o "$PROCMAN_BIN" ./cmd/procman
    mkdir -p /tmp/procman-test
    export PROCMAN_PORT=$TEST_PORT
    export PROCMAN_PID_FILE="/tmp/procman-test/procman.pid"
    export PROCMAN_LOG_FILE="/tmp/procman-test/procman.log"
}

# Cleanup
cleanup() {
    echo -e "${YELLOW}Cleaning up...${NC}"
    "$PROCMAN_BIN" stop > /dev/null 2>&1 || true
    rm -f "$PROCMAN_BIN"
    rm -rf /tmp/procman-test
}

# Stress test functions
stress_test_concurrent_requests() {
    echo -e "${YELLOW}Starting concurrent request stress test...${NC}"
    
    # Start daemon
    "$PROCMAN_BIN" start > /dev/null 2>&1
    sleep 2
    
    local start_time=$(date +%s)
    local end_time=$((start_time + TEST_DURATION))
    local request_count=0
    local error_count=0
    
    echo "Running stress test for $TEST_DURATION seconds..."
    
    while [[ $(date +%s) -lt $end_time ]]; do
        # Send 10 concurrent requests
        for i in {1..10}; do
            {
                if curl -s "http://localhost:$TEST_PORT/" > /dev/null; then
                    echo "success" >> /tmp/procman-test/stress_results.txt
                else
                    echo "error" >> /tmp/procman-test/stress_results.txt
                fi
            } &
        done
        wait
        
        request_count=$((request_count + 10))
        
        # Print progress on one line (echo has no -r flag; use printf)
        local elapsed=$(( $(date +%s) - start_time ))
        printf '\rProgress: %s/%s seconds, Requests: %s' "$elapsed" "$TEST_DURATION" "$request_count"
    done
    
    echo -e "\n${YELLOW}Analyzing results...${NC}"
    
    # grep -c already prints "0" on no match (while exiting nonzero),
    # so "|| echo 0" would duplicate the zero; only guard a missing file
    local successes=$(grep -c "success" /tmp/procman-test/stress_results.txt 2>/dev/null || true)
    local errors=$(grep -c "error" /tmp/procman-test/stress_results.txt 2>/dev/null || true)
    successes=${successes:-0}
    errors=${errors:-0}
    
    echo "Total requests: $request_count"
    echo "Successful requests: $successes"
    echo "Failed requests: $errors"
    
    if [[ $errors -eq 0 ]]; then
        echo -e "${GREEN}✓ Stress test passed - no errors!${NC}"
    else
        echo -e "${RED}✗ Stress test failed - $errors errors${NC}"
    fi
    
    # Stop daemon
    "$PROCMAN_BIN" stop > /dev/null 2>&1
    
    [[ $errors -eq 0 ]]
}

stress_test_memory_usage() {
    echo -e "${YELLOW}Starting memory usage stress test...${NC}"
    
    # Start daemon
    "$PROCMAN_BIN" start > /dev/null 2>&1
    sleep 2
    
    local start_time=$(date +%s)
    local end_time=$((start_time + TEST_DURATION))
    local pid=$(cat /tmp/procman-test/procman.pid)
    
    echo "Monitoring memory usage for $TEST_DURATION seconds..."
    
    local max_memory=0
    local initial_memory=$(ps -p "$pid" -o rss= | tr -d ' ')
    
    while [[ $(date +%s) -lt $end_time ]]; do
        # Redirect ps (not tr) errors so a vanished PID yields an empty string
        local current_memory=$(ps -p "$pid" -o rss= 2>/dev/null | tr -d ' ')
        current_memory=${current_memory:-0}
        
        if [[ $current_memory -gt $max_memory ]]; then
            max_memory=$current_memory
        fi
        
        # Send some requests
        curl -s "http://localhost:$TEST_PORT/" > /dev/null 2>&1 || true
        
        sleep 1
    done
    
    echo -e "${YELLOW}Memory usage results:${NC}"
    echo "Initial memory: ${initial_memory}KB"
    echo "Maximum memory: ${max_memory}KB"
    echo "Memory growth: $((max_memory - initial_memory))KB"
    
    # Check for memory leaks (arbitrary threshold: 10MB growth)
    local growth=$((max_memory - initial_memory))
    local threshold=10240  # 10MB in KB
    
    if [[ $growth -lt $threshold ]]; then
        echo -e "${GREEN}✓ Memory usage test passed - no significant memory leak${NC}"
    else
        echo -e "${RED}✗ Memory usage test failed - possible memory leak${NC}"
    fi
    
    # Stop daemon
    "$PROCMAN_BIN" stop > /dev/null 2>&1
    
    [[ $growth -lt $threshold ]]
}

# Main stress test execution
main() {
    echo -e "${YELLOW}Starting Procman Stress Tests${NC}"
    echo "================================"
    
    setup
    
    # Run stress tests
    stress_test_concurrent_requests
    stress_test_memory_usage
    
    echo -e "\n${GREEN}Stress tests completed!${NC}"
}

# Trap cleanup on exit
trap cleanup EXIT

# Run main function
main "$@"

test-manual.sh

#!/bin/bash

# Manual test guide for procman daemon

echo "=== Procman Manual Test Guide ==="
echo ""

echo "This script provides step-by-step instructions for manual testing."
echo "Please follow each test case and verify the expected results."
echo ""

# Build instructions
echo "1. Build the procman binary:"
echo "   go build -o procman ./cmd/procman"
echo ""

# Test environment setup
echo "2. Set up test environment:"
echo "   export PROCMAN_PORT=18080"
echo "   export PROCMAN_PID_FILE=/tmp/procman-test/procman.pid"
echo "   export PROCMAN_LOG_FILE=/tmp/procman-test/procman.log"
echo "   mkdir -p /tmp/procman-test"
echo ""

# Basic functionality tests
echo "=== Basic Functionality Tests ==="
echo ""

echo "Test 1: Start daemon"
echo "   Command: ./procman start"
echo "   Expected: Daemon starts, PID file created, port 18080 listening"
echo "   Verify: ps aux | grep procman, cat /tmp/procman-test/procman.pid"
echo ""

echo "Test 2: Check status"
echo "   Command: ./procman status"
echo "   Expected: Shows 'Daemon is running with PID <pid>'"
echo ""

echo "Test 3: Test HTTP endpoints"
echo "   Command: curl http://localhost:18080/"
echo "   Expected: 'Hello World'"
echo ""
echo "   Command: curl http://localhost:18080/health"
echo "   Expected: '{\"status\": \"healthy\"}'"
echo ""

echo "Test 4: Stop daemon"
echo "   Command: ./procman stop"
echo "   Expected: Daemon stops, PID file removed"
echo "   Verify: ./procman status shows 'Daemon is not running'"
echo ""

# Configuration tests
echo "=== Configuration Tests ==="
echo ""

echo "Test 5: Custom port"
echo "   Command: ./procman -port 19090 start"
echo "   Expected: Daemon starts on port 19090"
echo "   Verify: curl http://localhost:19090/"
echo "   Cleanup: ./procman -port 19090 stop"
echo ""

echo "Test 6: Environment variables"
echo "   Command: export PROCMAN_PORT=20000"
echo "   Command: ./procman start"
echo "   Expected: Daemon starts on port 20000"
echo "   Verify: curl http://localhost:20000/"
echo "   Cleanup: ./procman stop; unset PROCMAN_PORT"
echo ""

# Error handling tests
echo "=== Error Handling Tests ==="
echo ""

echo "Test 7: Port conflict"
echo "   Command: python3 -m http.server 18080 &"
echo "   Command: ./procman start"
echo "   Expected: Error about port being in use"
echo "   Cleanup: kill %1; ./procman stop"
echo ""

echo "Test 8: Invalid port"
echo "   Command: ./procman -port 99999 start"
echo "   Expected: Error about invalid port"
echo ""

echo "Test 9: Stop non-running daemon"
echo "   Command: ./procman stop"
echo "   Expected: Error about daemon not running"
echo ""

# Foreground mode tests
echo "=== Foreground Mode Tests ==="
echo ""

echo "Test 10: Run in foreground"
echo "   Command: ./procman run"
echo "   Expected: Daemon runs in foreground"
echo "   Test: In another terminal, curl http://localhost:18080/"
echo "   Cleanup: Press Ctrl+C"
echo ""

# Cross-platform tests
echo "=== Cross-Platform Tests ==="
echo ""

echo "Test 11: Build for different platforms"
echo "   Command: GOOS=linux GOARCH=amd64 go build -o procman-linux ./cmd/procman"
echo "   Command: GOOS=windows GOARCH=amd64 go build -o procman.exe ./cmd/procman"
echo "   Command: GOOS=darwin GOARCH=amd64 go build -o procman-macos ./cmd/procman"
echo "   Expected: Binaries build successfully"
echo ""

echo "=== Manual Test Complete ==="
echo ""
echo "Please verify each test case matches the expected results."
echo "Report any discrepancies or issues found."

Test Execution

Running Automated Tests

# Make test scripts executable
chmod +x test-e2e.sh test-stress.sh test-manual.sh

# Run comprehensive E2E tests
./test-e2e.sh

# Run stress tests
./test-stress.sh

# Run manual test guide
./test-manual.sh

Running Individual Tests

# Build the binary
go build -o procman ./cmd/procman

# Run specific test scenarios
./procman start
./procman status
./procman stop

# Test with different configurations
./procman -port 19090 start
curl http://localhost:19090/
./procman -port 19090 stop

Test Reporting

Expected Output Format

Starting Procman E2E Tests
================================

Running test: Daemon Start
✓ Test passed: Daemon Start

Running test: Daemon Stop
✓ Test passed: Daemon Stop

...

Test Results
================================
Total tests: 12
Passed: 12
Failed: 0

All tests passed! 🎉

Test Results Documentation

  • All test results should be documented
  • Failed tests should include detailed error information
  • Performance metrics should be recorded for stress tests
  • Cross-platform compatibility should be verified and documented

Continuous Integration

GitHub Actions Integration

name: E2E Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.23'
      - name: Run E2E tests
        run: |
          chmod +x test-e2e.sh
          ./test-e2e.sh
      - name: Run stress tests
        run: |
          chmod +x test-stress.sh
          ./test-stress.sh

This test suite covers the procman daemon's lifecycle, HTTP API, configuration handling, error paths, cross-platform builds, and performance characteristics, and provides both automated scripts and manual walkthroughs.