
System Architecture

This section describes how the tracking backend is structured and how its components interact.

Project Structure

tracking_backend/
├── app/
│   ├── main.py              # FastAPI application entry point & lifespan
│   ├── config.py            # Environment configuration (Settings class)
│   ├── database.py          # SQLAlchemy engine & session management
│   ├── models.py            # Database ORM models
│   ├── schemas.py           # Pydantic request/response schemas
│   ├── crud.py              # Database CRUD operations
│   ├── routers/             # API endpoint modules
│   │   ├── health.py
│   │   ├── movements.py
│   │   ├── drivers.py
│   │   ├── tractors.py
│   │   ├── trailers.py
│   │   ├── order_stops.py
│   │   └── chat.py
│   ├── services/            # Business logic & external integrations
│   │   ├── terminal_api.py      # ELD provider integration (Motive, Samsara, Geotab)
│   │   ├── mcleod_api.py        # McLeod TMS integration
│   │   ├── cache_service.py     # In-memory caching layer
│   │   ├── eta_service.py       # Mapbox ETA calculations
│   │   └── gemini_service.py    # AI chat functionality
│   └── sse/                 # Server-Sent Events streaming
│       └── location_stream.py
├── docker-compose.yml       # Container orchestration
├── Dockerfile               # Container image definition
└── requirements.txt         # Python dependencies

Core Design Patterns

DB-First Reads with Background Sync

The system follows a DB-first read pattern to ensure immediate API responses:
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   API Request   │────▶│   Database      │────▶│   Response      │
│                 │     │   (Fast Read)   │     │   (Immediate)   │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                 ▲
                                 │ Background Sync
                                 │
┌──────────────────┐    ┌────────┴────────┐
│  External APIs   │───▶│   Cache Layer   │
│ (Terminal/McLeod)│    │   (In-Memory)   │
└──────────────────┘    └─────────────────┘

  1. API requests read directly from PostgreSQL for immediate responses
  2. Background pollers continuously sync data from external APIs
  3. Cache layer reduces redundant external API calls
  4. Staleness tracking via updated_at timestamps
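As a rough illustration of the read path (the endpoint, schema, and helper names below are assumptions, not the actual router code), a movements endpoint reads only from the database:
# Hypothetical sketch of the DB-first read path; names are illustrative.
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session

from app import models, schemas          # assumed ORM model and response schema
from app.database import get_db          # assumed session dependency

router = APIRouter()

@router.get("/movements/{movement_id}", response_model=schemas.MovementOut)
def get_movement(movement_id: int, db: Session = Depends(get_db)):
    # Read straight from PostgreSQL; no external API call on the request path.
    # Background pollers keep the row fresh, and updated_at conveys staleness.
    movement = db.query(models.Movement).filter(models.Movement.id == movement_id).first()
    if movement is None:
        raise HTTPException(status_code=404, detail="Movement not found")
    return movement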

Dynamic Polling Strategy

Polling intervals adapt based on SSE client activity:
State     Interval       Condition
Active    5-10 seconds   SSE clients connected
Idle      60 seconds     No active SSE clients
This is implemented in location_stream.py via a client counter that triggers faster polling when client_count > 0.
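A simplified sketch of that behavior (the constants and loop below are illustrative, not the exact location_stream.py code):
# Simplified sketch of SSE-aware polling; values and names are illustrative.
import asyncio

ACTIVE_INTERVAL = 5    # seconds while at least one SSE client is connected
IDLE_INTERVAL = 60     # seconds when no SSE clients are connected

client_count = 0       # incremented on SSE connect, decremented on disconnect

async def polling_loop(sync_locations):
    while True:
        await sync_locations()
        # Poll faster whenever SSE clients are listening.
        interval = ACTIVE_INTERVAL if client_count > 0 else IDLE_INTERVAL
        await asyncio.sleep(interval)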

Token Bucket Rate Limiting

External API calls use a proactive token bucket algorithm:
# Simplified concept from terminal_api.py
import asyncio, time

class RateLimiter:
    def __init__(self, calls_per_minute: int):
        self.rate = calls_per_minute / 60.0      # tokens added per second
        self.capacity = float(calls_per_minute)
        self.tokens = self.capacity
        self.last_refill = time.time()

    async def acquire(self):
        # Refill tokens based on elapsed time
        now = time.time()
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        # Wait if no tokens available, then consume one token per call
        while self.tokens < 1:
            await asyncio.sleep(1 / self.rate)
            self.tokens = min(self.capacity, self.tokens + 1)
        self.tokens -= 1
This prevents hitting external rate limits (e.g., Samsara’s 1000 calls/min).
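Callers then acquire a token before each outbound request; for example (hypothetical usage, assuming an httpx-style async client):
# Hypothetical usage of the limiter around an outbound provider call.
import httpx

limiter = RateLimiter(calls_per_minute=1000)

async def fetch_vehicle_locations(url: str) -> dict:
    await limiter.acquire()                      # blocks until a token is free
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        response.raise_for_status()
        return response.json()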

Data Flow Diagrams

Location Data Flow

┌──────────────┐    ┌──────────────┐    ┌──────────────┐
│   Samsara    │    │    Motive    │    │    Geotab    │
│   (ELD API)  │    │   (ELD API)  │    │   (ELD API)  │
└──────┬───────┘    └──────┬───────┘    └──────┬───────┘
       │                   │                   │
       └───────────────────┼───────────────────┘
                           │
                           ▼
                 ┌──────────────────┐
                 │   Terminal API   │
                 │ (terminal_api.py)│
                 └────────┬─────────┘
                          │
                          ▼
                 ┌──────────────────┐
                 │   Cache Layer    │
                 │(cache_service.py)│
                 └────────┬─────────┘
                          │
        ┌─────────────────┼─────────────────┐
        ▼                 ▼                 ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│   tractors   │  │   trailers   │  │   drivers    │
│    table     │  │    table     │  │    table     │
└──────────────┘  └──────────────┘  └──────────────┘

Movement Data Flow

┌──────────────────┐
│   McLeod TMS     │
│   (SOAP/REST)    │
└────────┬─────────┘
         ▼
┌──────────────────┐
│  mcleod_api.py   │
│  (sync_movements)│
└────────┬─────────┘
         ▼
┌──────────────────┐
│   movements      │
│     table        │
└────────┬─────────┘
         │
    ┌────┴────┐
    ▼         ▼
┌───────┐  ┌───────┐
│orders │  │ stops │
│ table │  │ table │
└───────┘  └───────┘

Database Models

Key Tables and Relationships

Table       Purpose                    Key Fields
movements   Load/shipment records      id, order_id, status, driver_id
drivers     Driver profiles & HOS      id, name, hos_status, location
tractors    Vehicle data & locations   id, unit_number, latitude, longitude
trailers    Trailer tracking           id, unit_number, latitude, longitude
orders      Order headers              id, customer, movement_id
stops       Pickup/delivery stops      id, order_id, type, eta
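A condensed sketch of how one of these models might be declared in models.py (columns and options are illustrative, not the exact schema):
# Hypothetical condensed version of the movements model in models.py.
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, func
from app.database import Base                    # assumed declarative base

class Movement(Base):
    __tablename__ = "movements"

    id = Column(Integer, primary_key=True)
    order_id = Column(Integer, ForeignKey("orders.id"))
    driver_id = Column(Integer, ForeignKey("drivers.id"))
    status = Column(String, nullable=False)
    updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())
    synced_at = Column(DateTime(timezone=True), nullable=True)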

Staleness Tracking

Every table includes:
  • updated_at - Last database update timestamp
  • synced_at - Last external API sync timestamp (where applicable)
This enables clients to determine data freshness.
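For example, a consumer could treat a record as stale once updated_at exceeds some threshold (the five-minute cutoff below is illustrative):
# Illustrative staleness check; the threshold is an example, not a project setting.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=5)

def is_stale(updated_at: datetime) -> bool:
    return datetime.now(timezone.utc) - updated_at > STALE_AFTER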

Service Layer Details

Terminal API Integration (terminal_api.py)

Handles communication with ELD providers:
  • Motive: OAuth2 authentication, REST API
  • Samsara: API key authentication, REST API
  • Geotab: Session-based authentication, MyGeotab SDK
Each provider has:
  • Connection pooling
  • Automatic retry with exponential backoff
  • Error normalization to common schema
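A minimal sketch of the retry behavior (delays, attempt counts, and exception types below are illustrative, not the exact terminal_api.py implementation):
# Illustrative retry with exponential backoff around a provider request.
import asyncio
import httpx

async def request_with_retry(client: httpx.AsyncClient, url: str, max_attempts: int = 4):
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            response = await client.get(url)
            response.raise_for_status()
            return response.json()
        except (httpx.TransportError, httpx.HTTPStatusError):
            if attempt == max_attempts:
                raise                      # surface the error after the last attempt
            await asyncio.sleep(delay)     # back off before retrying
            delay *= 2                     # 1s, 2s, 4s, ...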

Cache Service (cache_service.py)

In-memory caching with TTL:
# Cache structure
{
    "locations:tractor:{id}": {
        "data": {...},
        "expires_at": timestamp
    },
    "hos:driver:{id}": {
        "data": {...},
        "expires_at": timestamp
    }
}
Default TTLs:
  • Location data: 30 seconds
  • HOS data: 60 seconds
  • Driver profiles: 300 seconds
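Conceptually, reads and writes against that structure behave like the following simplified helpers (the real cache_service.py may differ in detail):
# Simplified TTL cache helpers mirroring the structure shown above.
import time

_cache: dict[str, dict] = {}

def cache_set(key: str, data, ttl_seconds: int) -> None:
    _cache[key] = {"data": data, "expires_at": time.time() + ttl_seconds}

def cache_get(key: str):
    entry = _cache.get(key)
    if entry is None or entry["expires_at"] < time.time():
        _cache.pop(key, None)              # lazily evict expired entries
        return None
    return entry["data"]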

Gemini Service (gemini_service.py)

AI chat uses function calling with these tools:
Tool                   Purpose
get_driver_location    Fetch driver’s current position
get_movement_status    Check load status
get_eta                Calculate arrival time
search_drivers         Find drivers by name/ID
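One way this can be wired up is a name-to-handler registry that the chat flow consults when the model requests a function call; the handlers below are assumptions, not the actual gemini_service.py code:
# Hypothetical tool registry used to dispatch model-requested function calls.
from app import crud                          # assumed handler locations
from app.services import eta_service

TOOL_HANDLERS = {
    "get_driver_location": crud.get_driver_location,
    "get_movement_status": crud.get_movement_status,
    "get_eta": eta_service.get_eta,
    "search_drivers": crud.search_drivers,
}

def dispatch_tool_call(name: str, arguments: dict):
    # Invoked when the model returns a function call; the result is sent back to the model.
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        raise ValueError(f"Unknown tool: {name}")
    return handler(**arguments)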

Lifespan Management

FastAPI’s lifespan context manages background services:
from contextlib import asynccontextmanager
from fastapi import FastAPI

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup
    await start_background_pollers()
    await initialize_cache()

    yield  # Application runs

    # Shutdown
    await stop_background_pollers()
    await cleanup_connections()
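The handler is registered when the application object is created (typically in main.py):
app = FastAPI(lifespan=lifespan)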

Environment Configuration

Key settings from config.py:
Variable           Purpose
DATABASE_URL       PostgreSQL connection string
TERMINAL_API_KEY   ELD provider API key
MCLEOD_API_URL     TMS integration endpoint
MAPBOX_TOKEN       ETA calculation service
GEMINI_API_KEY     AI chat functionality
POLLING_INTERVAL   Background sync frequency
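A sketch of the corresponding Settings class (assuming pydantic-settings; field types and defaults are illustrative):
# Illustrative Settings class for config.py; assumes pydantic-settings.
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    DATABASE_URL: str
    TERMINAL_API_KEY: str
    MCLEOD_API_URL: str
    MAPBOX_TOKEN: str
    GEMINI_API_KEY: str
    POLLING_INTERVAL: int = 60    # seconds; example default

settings = Settings()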

Next Steps