# feat: Geutebruck GeViScope/GeViSoft Action Mapping System - MVP
This MVP release provides a complete full-stack solution for managing action mappings
in Geutebruck's GeViScope and GeViSoft video surveillance systems.

## Features

### Flutter Web Application (Port 8081)
- Modern, responsive UI for managing action mappings
- Action picker dialog with full parameter configuration
- Support for both GSC (GeViScope) and G-Core server actions
- Consistent UI for input and output actions with edit/delete capabilities
- Real-time action mapping creation, editing, and deletion
- Server categorization (GSC: prefix for GeViScope, G-Core: prefix for G-Core servers)

### FastAPI REST Backend (Port 8000)
- RESTful API for action mapping CRUD operations
- Action template service with comprehensive action catalog (247 actions)
- Server management (G-Core and GeViScope servers)
- Configuration tree reading and writing
- JWT authentication with role-based access control
- PostgreSQL database integration
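
To make the shape of this API concrete, here is a deliberately simplified, hypothetical sketch of a JWT-protected CRUD endpoint for action mappings. The route paths, `ActionMapping` fields, and in-memory store are illustrative assumptions only; the real backend persists to PostgreSQL and enforces role-based access control.

```python
# Hypothetical sketch of a JWT-protected CRUD endpoint for action mappings.
# Route paths, the ActionMapping fields, and the in-memory store are assumptions;
# the real backend uses PostgreSQL and role-based access control. The
# OAuth2PasswordBearer dependency only extracts the bearer token here; actual
# JWT validation is omitted for brevity.
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import OAuth2PasswordBearer
from pydantic import BaseModel

app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

class ActionMapping(BaseModel):
    id: int | None = None
    input_action: str       # e.g. "GSC: DigitalInput 5"   (assumed naming)
    output_action: str      # e.g. "G-Core: PanLeft"       (assumed naming)
    camera: int             # camera / PTZ head parameter

_mappings: dict[int, ActionMapping] = {}    # stand-in for the database layer

@app.post("/action-mappings", response_model=ActionMapping)
def create_mapping(mapping: ActionMapping, token: str = Depends(oauth2_scheme)):
    mapping.id = len(_mappings) + 1
    _mappings[mapping.id] = mapping
    return mapping

@app.get("/action-mappings/{mapping_id}", response_model=ActionMapping)
def read_mapping(mapping_id: int, token: str = Depends(oauth2_scheme)):
    if mapping_id not in _mappings:
        raise HTTPException(status_code=404, detail="Action mapping not found")
    return _mappings[mapping_id]
```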

### C# SDK Bridge (gRPC, Port 50051)
- Native integration with GeViSoft SDK (GeViProcAPINET_4_0.dll)
- Action mapping creation with correct binary format
- Support for GSC and G-Core action types
- Proper Camera parameter inclusion in action strings (fixes CrossSwitch bug)
- Action ID lookup table with server-specific action IDs
- Configuration reading/writing via SetupClient

## Bug Fixes
- **CrossSwitch Bug**: GSC and G-Core actions now correctly display camera/PTZ head parameters in GeViSet
- Action strings now include Camera parameter: `@ PanLeft (Comment: "", Camera: 101028)`
- Proper filter flags and VideoInput=0 for action mappings
- Correct action ID assignment (4198 for GSC, 9294 for G-Core PanLeft)
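
For illustration, the corrected action-string format can be reproduced with a small sketch that uses only the example string and action IDs quoted above; the real implementation lives in the C# SDK bridge against the GeViSoft SDK.

```python
# Sketch reproducing the corrected action-string format, using only the example
# string and the action IDs quoted in the bug-fix notes above.
ACTION_IDS = {
    ("GSC", "PanLeft"): 4198,      # from the bug-fix notes
    ("G-Core", "PanLeft"): 9294,
}

def build_action_string(action: str, camera: int, comment: str = "") -> str:
    """Build an action string that includes the Camera parameter."""
    return f'@ {action} (Comment: "{comment}", Camera: {camera})'

print(build_action_string("PanLeft", 101028))
# -> @ PanLeft (Comment: "", Camera: 101028)
print(ACTION_IDS[("GSC", "PanLeft")])    # -> 4198
```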

## Technical Stack
- **Frontend**: Flutter Web, Dart, Dio HTTP client
- **Backend**: Python FastAPI, PostgreSQL, Redis
- **SDK Bridge**: C# .NET 8.0, gRPC, GeViSoft SDK
- **Authentication**: JWT tokens
- **Configuration**: GeViSoft .set files (binary format)

## Credentials
- GeViSoft/GeViScope: username=sysadmin, password=masterkey
- Default admin: username=admin, password=admin123

## Deployment
All services run on localhost:
- Flutter Web: http://localhost:8081
- FastAPI: http://localhost:8000
- SDK Bridge gRPC: localhost:50051
- GeViServer: localhost (default port)


---

# Command file: `geutebruck/geutebruck-api/.claude/commands/speckit.tasks.md`

Description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.

## User Input

$ARGUMENTS

You MUST consider the user input before proceeding (if not empty).

## Outline

  1. Setup: Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json` from the repo root and parse FEATURE_DIR and the AVAILABLE_DOCS list (see the sketch after this outline). All paths must be absolute. For single quotes in arguments like "I'm Groot", use escape syntax, e.g. 'I'''m Groot' (or double-quote if possible: "I'm Groot").

  2. Load design documents: Read from FEATURE_DIR:

    • Required: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
    • Optional: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
    • Note: Not all projects have all documents. Generate tasks based on what's available.
  3. Execute task generation workflow:

    • Load plan.md and extract tech stack, libraries, project structure
    • Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
    • If data-model.md exists: Extract entities and map to user stories
    • If contracts/ exists: Map endpoints to user stories
    • If research.md exists: Extract decisions for setup tasks
    • Generate tasks organized by user story (see Task Generation Rules below)
    • Generate dependency graph showing user story completion order
    • Create parallel execution examples per user story
    • Validate task completeness (each user story has all needed tasks, independently testable)
  4. Generate tasks.md: Use `.specify/templates/tasks-template.md` as structure, fill with:

    • Correct feature name from plan.md
    • Phase 1: Setup tasks (project initialization)
    • Phase 2: Foundational tasks (blocking prerequisites for all user stories)
    • Phase 3+: One phase per user story (in priority order from spec.md)
    • Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
    • Final Phase: Polish & cross-cutting concerns
    • All tasks must follow the strict checklist format (see Task Generation Rules below)
    • Clear file paths for each task
    • Dependencies section showing story completion order
    • Parallel execution examples per story
    • Implementation strategy section (MVP first, incremental delivery)
  5. Report: Output path to generated tasks.md and summary:

    • Total task count
    • Task count per user story
    • Parallel opportunities identified
    • Independent test criteria for each story
    • Suggested MVP scope (typically just User Story 1)
    • Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)
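
To make steps 1 and 2 of this outline concrete, the following sketch (Python, purely illustrative; the agent performs these steps itself) runs the prerequisites script and loads whichever design documents exist. It assumes the script prints a single JSON object containing FEATURE_DIR and AVAILABLE_DOCS, as step 1 states.

```python
# Illustrative sketch of steps 1-2: run the prerequisites script, parse its JSON
# output, and load whichever design documents are present.
import json
import subprocess
from pathlib import Path

result = subprocess.run(
    ["pwsh", "-NoProfile", "-File",
     ".specify/scripts/powershell/check-prerequisites.ps1", "-Json"],
    capture_output=True, text=True, check=True,
)
prereqs = json.loads(result.stdout)

feature_dir = Path(prereqs["FEATURE_DIR"]).resolve()    # all paths absolute
available_docs = prereqs["AVAILABLE_DOCS"]

required = ["plan.md", "spec.md"]                       # must be present
optional = ["data-model.md", "research.md", "quickstart.md"]

missing = [doc for doc in required if not (feature_dir / doc).exists()]
if missing:
    raise FileNotFoundError(f"Missing required design documents: {missing}")

docs = {
    doc: (feature_dir / doc).read_text(encoding="utf-8")
    for doc in required + optional
    if (feature_dir / doc).exists()
}
contracts_dir = feature_dir / "contracts"               # optional API contracts
contract_files = sorted(contracts_dir.glob("*")) if contracts_dir.is_dir() else []
```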

Context for task generation: $ARGUMENTS

The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.

## Task Generation Rules

CRITICAL: Tasks MUST be organized by user story to enable independent implementation and testing.

Tests are OPTIONAL: Only generate test tasks if explicitly requested in the feature specification or if user requests TDD approach.

### Checklist Format (REQUIRED)

Every task MUST strictly follow this format:

`- [ ] [TaskID] [P?] [Story?] Description with file path`

Format Components:

  1. Checkbox: ALWAYS start with - [ ] (markdown checkbox)
  2. Task ID: Sequential number (T001, T002, T003...) in execution order
  3. [P] marker: Include ONLY if task is parallelizable (different files, no dependencies on incomplete tasks)
  4. [Story] label: REQUIRED for user story phase tasks only
    • Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
    • Setup phase: NO story label
    • Foundational phase: NO story label
    • User Story phases: MUST have story label
    • Polish phase: NO story label
  5. Description: Clear action with exact file path

Examples:

  • CORRECT: - [ ] T001 Create project structure per implementation plan
  • CORRECT: - [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py
  • CORRECT: - [ ] T012 [P] [US1] Create User model in src/models/user.py
  • CORRECT: - [ ] T014 [US1] Implement UserService in src/services/user_service.py
  • WRONG: - [ ] Create User model (missing ID and Story label)
  • WRONG: T001 [US1] Create model (missing checkbox)
  • WRONG: - [ ] [US1] Create User model (missing Task ID)
  • WRONG: - [ ] T001 [US1] Create model (missing file path)
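
A minimal sketch of how the format-validation and reporting steps could be automated is shown below. The rules are simplified (for example, the file-path requirement is not checked), so treat it as illustrative rather than normative.

```python
# Minimal sketch of a checklist-format check for generated tasks.md lines, plus
# per-story counts and parallel ([P]) opportunities for the final report.
import re
from collections import Counter

TASK_LINE = re.compile(
    r"^- \[ \] "                # checkbox
    r"(?P<id>T\d{3}) "          # task ID in execution order (T001, T002, ...)
    r"(?P<parallel>\[P\] )?"    # optional parallelizable marker
    r"(?P<story>\[US\d+\] )?"   # story label (required only in story phases)
    r"(?P<description>.+)$"     # description (should include a file path)
)

def analyze_tasks(lines):
    invalid, per_story, parallel = [], Counter(), 0
    for line in lines:
        if not line.startswith("- [ ]"):
            continue                       # not a task line (headings, prose, ...)
        match = TASK_LINE.match(line)
        if not match:
            invalid.append(line)
            continue
        if match["story"]:
            per_story[match["story"].strip()] += 1
        if match["parallel"]:
            parallel += 1
    return invalid, per_story, parallel

samples = [
    "- [ ] T012 [P] [US1] Create User model in src/models/user.py",
    "- [ ] T014 [US1] Implement UserService in src/services/user_service.py",
    "- [ ] Create User model",   # WRONG: missing ID and story label
]
print(analyze_tasks(samples))
# -> (['- [ ] Create User model'], Counter({'[US1]': 2}), 1)
```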

### Task Organization

  1. From User Stories (spec.md) - PRIMARY ORGANIZATION:

    • Each user story (P1, P2, P3...) gets its own phase
    • Map all related components to their story:
      • Models needed for that story
      • Services needed for that story
      • Endpoints/UI needed for that story
      • If tests requested: Tests specific to that story
    • Mark story dependencies (most stories should be independent)
  2. From Contracts:

    • Map each contract/endpoint to the user story it serves
    • If tests requested: Each contract → contract test task [P] before implementation in that story's phase
  3. From Data Model:

    • Map each entity to the user story(ies) that need it
    • If an entity serves multiple stories: put it in the earliest story that needs it, or in the Setup phase if it is shared infrastructure (see the sketch after this list)
    • Relationships → service layer tasks in appropriate story phase
  4. From Setup/Infrastructure:

    • Shared infrastructure → Setup phase (Phase 1)
    • Foundational/blocking tasks → Foundational phase (Phase 2)
    • Story-specific setup → within that story's phase
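
As a small illustration of the entity-placement rule in point 3 above (entity usage and story numbers are hypothetical):

```python
# Illustration of the entity-placement rule: an entity needed by several user
# stories goes into the earliest story that needs it; shared infrastructure
# (no specific story) goes into the Setup phase. Values are hypothetical.
def assign_entity_phase(stories_using_entity: set[int]) -> str:
    """Return the phase label where the entity's model task belongs."""
    if not stories_using_entity:
        return "Setup"                           # shared infrastructure, Phase 1
    return f"US{min(stories_using_entity)}"      # earliest user story

print(assign_entity_phase({1, 3}))   # used by US1 and US3 -> "US1"
print(assign_entity_phase(set()))    # not tied to a story -> "Setup"
```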

### Phase Structure

  • Phase 1: Setup (project initialization)
  • Phase 2: Foundational (blocking prerequisites - MUST complete before user stories)
  • Phase 3+: User Stories in priority order (P1, P2, P3...)
    • Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
    • Each phase should be a complete, independently testable increment
  • Final Phase: Polish & Cross-Cutting Concerns
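
A compact sketch of this phase ordering, with placeholder story labels, priorities, and titles:

```python
# Sketch of the phase ordering described above. Story labels, priorities, and
# titles are placeholders; real values come from spec.md.
from dataclasses import dataclass

@dataclass
class UserStory:
    label: str      # "US1", "US2", ...
    priority: int   # 1 = P1 (highest)
    title: str

def build_phases(stories: list[UserStory]) -> list[str]:
    phases = ["Phase 1: Setup", "Phase 2: Foundational"]
    for number, story in enumerate(sorted(stories, key=lambda s: s.priority), start=3):
        phases.append(f"Phase {number}: {story.label} (P{story.priority}) - {story.title}")
    phases.append(f"Phase {len(phases) + 1}: Polish & Cross-Cutting Concerns")
    return phases

for phase in build_phases([UserStory("US2", 2, "Edit an action mapping"),
                           UserStory("US1", 1, "Create an action mapping")]):
    print(phase)
# Phase 1: Setup
# Phase 2: Foundational
# Phase 3: US1 (P1) - Create an action mapping
# Phase 4: US2 (P2) - Edit an action mapping
# Phase 5: Polish & Cross-Cutting Concerns
```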