
### **AI Coding Prompt (Revised for Positional Control): "Pose Breaker"**

#### **1. Project Overview**

*   **Application Name:** Pose Breaker
*   **Core Goal:** To create a web-based, retro-style brick-breaker arcade game where the player controls the paddle's **absolute position** in real-time by tilting their body, tracked via the computer's webcam.
*   **Target User:** Casual gamers and users looking for a novel, physically interactive web experience.
*   **Key Features List:**
    *   Classic brick-breaker gameplay (paddle, ball, bricks).
    *   Real-time player body pose detection using the webcam.
    *   **Positional Paddle Control:** The paddle's horizontal position directly maps to the player's shoulder tilt angle. An upright posture centers the paddle.
    *   Game UI includes a main game area, a score/lives display, and a small picture-in-picture (PiP) view of the webcam feed for user feedback.
    *   Complete game flow: Start Screen, Gameplay, Game Over screen, and Win screen.
    *   Aesthetic: 8-bit retro, pixel-art style.

#### **2. Step-by-step Module Breakdown**

This project will be built as a single-page web application using HTML, CSS, and vanilla JavaScript, leveraging the TensorFlow.js library.

**Module 1: Core Technology Setup (Webcam and Pose Detection)**

This module is foundational, setting up the connection between the user's camera and the AI model.

*   **HTML Structure:** A `<video>` element (initially hidden), a main `<canvas>` element, and UI elements like buttons and status messages.
*   **JavaScript Logic:**
    1.  **Library Loading & Initialization:** Load TensorFlow.js and the `@tensorflow-models/pose-detection` library from a CDN. Create an `init` function that waits for `tf.ready()`, gets DOM element references, and pre-loads the MoveNet model.
    2.  **Model Loading (`loadModel`):** Create a detector instance for `poseDetection.SupportedModels.MoveNet` with `modelType: poseDetection.movenet.modelType.SINGLEPOSE_LIGHTNING` for maximum performance.
    3.  **Camera Handling (`startCamera`):** Use `navigator.mediaDevices.getUserMedia()` to request a video stream and attach it to the hidden `<video>` element. Once the video is playing, start the main detection loop.
    4.  **Detection Loop (`detectPose`):** Use `requestAnimationFrame` to continuously run pose estimation (`detector.estimatePoses(video)`). If a pose is found, pass it to the control logic (`updatePaddlePosition`) and the drawing functions.
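The four steps above can be sketched as follows. This is a minimal outline, not the final implementation: it assumes the CDN bundles expose the `tf` and `poseDetection` globals, and the `'video'` element id is a hypothetical placeholder.

```javascript
// Minimal Module 1 skeleton. Assumes the TensorFlow.js and
// @tensorflow-models/pose-detection CDN bundles expose `tf` and `poseDetection`.
let detector = null;

async function loadModel() {
    // SINGLEPOSE_LIGHTNING trades some accuracy for the low latency the game needs.
    detector = await poseDetection.createDetector(
        poseDetection.SupportedModels.MoveNet,
        { modelType: poseDetection.movenet.modelType.SINGLEPOSE_LIGHTNING }
    );
}

async function startCamera(video) {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    video.srcObject = stream;
    await video.play();
}

async function detectPose(video) {
    const poses = await detector.estimatePoses(video);
    if (poses.length > 0) {
        updatePaddlePosition(poses[0]); // Module 2 control logic
    }
    requestAnimationFrame(() => detectPose(video)); // keep the loop running
}

async function init() {
    await tf.ready();
    const video = document.getElementById('video'); // hypothetical element id
    await loadModel();
    await startCamera(video);
    detectPose(video);
}
```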

**Module 2: Positional Control Integration and Logic**

This is the core change. We now map body tilt directly to the paddle's X-coordinate.

*   **Action:**
    1.  Create the `updatePaddlePosition(pose)` function.
    2.  Inside, find the left and right shoulder keypoints, ensuring their confidence scores are above a threshold (e.g., 0.3).
    3.  Calculate a `tiltFactor` based on the y-coordinate difference: `tiltFactor = rightShoulder.y - leftShoulder.y;`. **Note the order:** this makes tilting right produce a positive value.
    4.  **Map Tilt to Position:**
        *   Define a `MAX_TILT` constant. This represents the maximum expected `tiltFactor` value when a user tilts fully. This value (e.g., 80) may need tuning.
        *   Normalize the tilt: `normalizedTilt = tiltFactor / MAX_TILT;`. Clamp this value between -1 (full left tilt) and 1 (full right tilt) using `Math.max(-1, Math.min(1, ...))`.
        *   The paddle's movement range is from `0` to `canvas.width - paddle.width`.
        *   Convert the `normalizedTilt` (from -1 to 1) to a screen position. A value of 0 should center the paddle. A value of 1 should place it at the far right.
        *   Calculate the final position: `paddle.x = centerOfRange + (normalizedTilt * halfOfRange);`.

*   **Code Example (Positional Control Logic):**
    ```javascript
    // === In your main game setup ===
    // const gameCanvas = document.getElementById('gameCanvas');
    let paddle = { x: gameCanvas.width / 2 - 50, y: 580, width: 100, height: 10 };

    // === The new control function ===
    function updatePaddlePosition(pose) {
        const leftShoulder = pose.keypoints.find(k => k.name === 'left_shoulder');
        const rightShoulder = pose.keypoints.find(k => k.name === 'right_shoulder');

        if (leftShoulder && rightShoulder && leftShoulder.score > 0.3 && rightShoulder.score > 0.3) {
            // This value may need to be tuned for user comfort. It's the pixel difference 
            // between shoulders at a comfortable maximum tilt.
            const MAX_TILT = 80; 

            // rightShoulder.y - leftShoulder.y makes tilting right produce a positive value.
            const tiltFactor = rightShoulder.y - leftShoulder.y;

            // Normalize the tilt to a range of -1.0 to 1.0
            let normalizedTilt = tiltFactor / MAX_TILT;
            normalizedTilt = Math.max(-1, Math.min(1, normalizedTilt)); // Clamp the value

            const movementRange = gameCanvas.width - paddle.width;
            const paddleX = (movementRange / 2) * (1 + normalizedTilt);

            // Directly set the paddle's position
            paddle.x = paddleX;
        }
    }
    ```
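To sanity-check the mapping in isolation, the tilt-to-position math above can be factored into a pure helper and exercised with mock shoulder coordinates, with no camera or canvas required. The numbers below assume `MAX_TILT = 80` and the 600-pixel-wide canvas with a 100-pixel paddle from the example:

```javascript
// Pure version of the tilt-to-position mapping from updatePaddlePosition,
// factored out so it can be tested headlessly.
function computePaddleX(leftShoulderY, rightShoulderY, canvasWidth, paddleWidth, maxTilt = 80) {
    const tiltFactor = rightShoulderY - leftShoulderY;
    const normalizedTilt = Math.max(-1, Math.min(1, tiltFactor / maxTilt));
    const movementRange = canvasWidth - paddleWidth;
    return (movementRange / 2) * (1 + normalizedTilt);
}

// Upright posture (equal shoulder heights) centers the paddle:
computePaddleX(300, 300, 600, 100);  // → 250
// A full right tilt (tiltFactor = +80) pins it to the right edge:
computePaddleX(260, 340, 600, 100);  // → 500
// An over-rotated left tilt clamps at the left edge:
computePaddleX(400, 260, 600, 100);  // → 0
```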

---

#### **3. Priority Order (CRITICAL DEVELOPMENT ROADMAP)**

Follow this staged approach. **Do not proceed to the next step until the current one is fully working.**

*   **✅ Step 1: Milestone 1 - The Visual Control System Test (COMPLETED)**
    *   **Goal:** To create a visual testbed to confirm the new positional control works perfectly, while providing the user with direct visual feedback.
    *   **Implementation:**
        1.  ✅ Implement the full **Module 1** setup (HTML, JS, TensorFlow.js init, camera).
        2.  ✅ Implement **Module 2**, the new `updatePaddlePosition` positional logic.
        3.  ✅ Create a main `gameLoop` function using `requestAnimationFrame`.
        4.  ✅ **Inside the loop:**
            a.  Clear the main canvas.
            b.  **Draw the mirrored webcam video feed directly onto the canvas as the background.** This is crucial for user feedback. (`ctx.drawImage(video, ...)`).
            c.  **Draw the paddle rectangle on top of the video feed** at its updated `paddle.x` position.
    *   **Test Criteria:** ✅ Run the code. You should see your own camera image on the screen. When your body is upright, the paddle is centered. As you tilt your body to the right, the paddle should smoothly move to the right side of the screen, its position directly corresponding to your tilt angle. Tilting left moves it left. **This milestone is complete when the control feels intuitive and correct over your own video.**
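The Milestone 1 frame rendering could look like the sketch below. Mirroring matters: without it, tilting right would appear to move the on-screen image left and the control would feel inverted. `drawFrame` is a hypothetical helper name; it would be called once per `requestAnimationFrame` tick.

```javascript
// One Milestone-1 frame: mirrored video background, paddle drawn on top.
function drawFrame(ctx, video, paddle, width, height) {
    ctx.clearRect(0, 0, width, height);

    // Flip horizontally: scale(-1, 1) mirrors the x-axis, so the image must
    // be drawn starting at -width to land back inside the visible canvas.
    ctx.save();
    ctx.scale(-1, 1);
    ctx.drawImage(video, -width, 0, width, height);
    ctx.restore();

    // Paddle drawn after restore() so it is not mirrored.
    ctx.fillStyle = '#00ff00';
    ctx.fillRect(paddle.x, paddle.y, paddle.width, paddle.height);
}
```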

*   **✅ Step 2: Build the Core Game Mechanics (COMPLETED)**
    *   **Goal:** With the controls verified, build the full game.
    *   **Implementation:**
        1.  ✅ Remove the video background drawing from the main loop (or replace it with a game background).
        2.  ✅ Add the Ball entity, Bricks, collision detection, scoring, and lives system.
        3.  ✅ Implement win/loss conditions (`GAME_OVER`, `WIN` states).

*   **✅ Step 3: Add UI, Polish, and Game Flow (COMPLETED)**
    *   **Goal:** Wrap the game in a complete user experience.
    *   **Implementation:**
        1.  ✅ Implement the UI screens (Start, Game Over, Win).
        2.  ✅ Apply the 8-bit retro styling.
        3.  ✅ Implement the separate, small picture-in-picture (PiP) canvas for the final UI, which will show the user's skeleton for feedback during gameplay.
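The PiP skeleton overlay from Step 3 can be reduced to a small drawing helper. This sketch only draws one dot per confident keypoint, assuming the MoveNet keypoint shape (`{ x, y, score, name }`) used elsewhere in this spec; line segments between adjacent joints would be drawn the same way with `moveTo`/`lineTo`.

```javascript
// Draws one dot per confident keypoint onto the PiP canvas context.
function drawSkeletonPoints(ctx, keypoints, minScore = 0.3) {
    for (const k of keypoints) {
        if (k.score < minScore) continue; // skip low-confidence keypoints
        ctx.beginPath();
        ctx.arc(k.x, k.y, 3, 0, Math.PI * 2);
        ctx.fill();
    }
}
```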

#### **4. CURRENT STATUS (Updated)**

**🎉 IMPLEMENTATION COMPLETE:** All planned features have been successfully implemented and are working:

*   ✅ **Core Technology:** TensorFlow.js pose detection with MoveNet Lightning model
*   ✅ **Positional Control:** Body tilt directly controls paddle position using shoulder angle mapping
*   ✅ **Game Mechanics:** Full brick-breaker game with ball physics, collision detection, scoring, and lives
*   ✅ **Enhanced Controls:** Arm raise gesture to launch ball (beyond original plan)
*   ✅ **Complete UI:** Game states (waiting, playing, game over, win) with 8-bit retro styling
*   ✅ **Picture-in-Picture:** Real-time webcam feed with skeleton overlay for user feedback
*   ✅ **Game Flow:** Restart functionality with spacebar, proper state management
*   ✅ **Responsive Design:** Proper canvas sizing and layout
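The arm-raise launch gesture listed above can be detected with the same keypoints the paddle control already uses. A minimal sketch (the exact thresholds in the shipped version may differ): in image coordinates y grows downward, so a raised wrist has a *smaller* y than its shoulder.

```javascript
// True if either wrist is confidently detected above its shoulder.
function isArmRaised(pose, minScore = 0.3) {
    const get = name => pose.keypoints.find(k => k.name === name);
    const pairs = [['left_wrist', 'left_shoulder'], ['right_wrist', 'right_shoulder']];
    return pairs.some(([w, s]) => {
        const wrist = get(w), shoulder = get(s);
        return wrist && shoulder &&
               wrist.score > minScore && shoulder.score > minScore &&
               wrist.y < shoulder.y; // y grows downward: smaller y = higher up
    });
}
```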

**🔧 POTENTIAL ENHANCEMENTS:**
*   Ball speed adjustment (current user feedback: too slow)
*   Difficulty levels with different brick arrangements
*   Power-ups activated by specific poses
*   Sound effects and background music
*   High score persistence

#### **5. UI Design (8-Bit Retro Style)**

*   **General Style:** Use a pixel font ("Press Start 2P"), a limited NES-style color palette, and disable image smoothing (`context.imageSmoothingEnabled = false;`) for a sharp, pixelated look.
*   **Page Breakdown:**
    *   **Start Screen:** Black background, centered title "POSE BREAKER", flashing "START GAME" button, and instruction text.
    *   **Game Screen:** Main game area. Top-right: Score and Lives. Bottom-right: The small PiP canvas showing the user's video and skeleton overlay.
    *   **Game Over / Win Screen:** A semi-transparent overlay with centered text, final score, and a "PLAY AGAIN" button.
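The sharp-pixel settings described in the General Style bullet can be bundled into one setup helper. `applyRetroStyle` is a hypothetical name; `canvas` and `ctx` come from the game's own setup code, and the font string assumes "Press Start 2P" is loaded (with a monospace fallback).

```javascript
// Applies the 8-bit presentation settings from the UI design section.
function applyRetroStyle(canvas, ctx) {
    ctx.imageSmoothingEnabled = false;             // no interpolation when scaling sprites
    canvas.style.imageRendering = 'pixelated';     // CSS nearest-neighbour upscaling
    ctx.font = '16px "Press Start 2P", monospace'; // pixel font with a safe fallback
}
```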

#### **6. Technology Stack Selection**

*   **Frontend:** **Vanilla JavaScript, HTML5, and CSS.**
*   **AI/ML Library:** **TensorFlow.js with the MoveNet model.**
    *   **Reasoning:** High-performance, client-side, industry standard. The SinglePose Lightning variant is essential for low-latency control.
    *   **Implementation:** Load libraries from CDN for simplicity.
*   **Deployment:** **Vercel, Netlify, or GitHub Pages.**
    *   **Reasoning:** Free, simple deployment with automatic HTTPS, which is **mandatory for webcam access**.