AI Vision Monitoring Platform
Client
In-House Product
Duration
Role
UX Manager
This project is under NDA, so only a summary appears below. Reach out for details on the process and final designs.
What
Our customers install networked cameras across factory floors, construction sites, and public areas to monitor:
Equipment inspection
Hazardous zone identification
Object counting
Predictive maintenance
Analog instrument reading
Crowd analytics
They need a single interface to watch live feeds, review recordings, interact with video (pause, seek, screenshot), filter by use case or camera group, and view alerts and analytics, all without switching tools.
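To make that scope concrete, here is a minimal sketch of the kind of data model such an interface implies. Every name below is an illustrative assumption; the actual schema is under NDA.

```typescript
// Illustrative data model for the unified interface; names are
// assumptions, not the NDA'd production schema.
type UseCase =
  | 'equipment-inspection'
  | 'hazard-zone'
  | 'object-counting'
  | 'predictive-maintenance'
  | 'analog-reading'
  | 'crowd-analytics';

interface CameraFeed {
  id: string;
  label: string;        // e.g. "Floor 2 / Assembly Line A"
  group: string;        // camera group: floor, area, or site
  useCases: UseCase[];  // analytics enabled on this feed
  liveUrl: string;      // live stream endpoint
  recordingUrl: string; // most recent recording segment
}

interface Alert {
  id: string;
  feedId: string;                        // ties the alert to its camera
  useCase: UseCase;                      // which detector fired
  raisedAt: string;                      // ISO-8601 timestamp
  clip: { start: string; end: string };  // video-evidence window
}
```

Keeping the evidence window on the alert itself is what makes "click from notification straight to footage" a one-hop interaction.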
Why
Existing video systems force operators to juggle separate players, spreadsheets, and email alerts. This creates friction:
Fragmented UI: Live streams, recordings, and analytics live in different apps
Manual workflows: Teams download clips, then run analytics in Excel—slow and error-prone
Poor filtering: Hard to focus on specific cameras, floors, or items (e.g., helmet compliance)
Alert fatigue: Notifications aren’t tied back to video evidence, so context is lost
By unifying video control, filtering, alerts, and reporting, we reduce response time and improve situational awareness.
User and Business Goals
Users
Operators & Safety Managers need:
One-click expand of any feed (live or recorded)
Seamless pause, rewind, and screenshot
Dropdown filters to focus on helmets, PPE, or specific floor views
Real-time alerts linked to video evidence
Analysts & Engineers need:
Unified dashboard for video-driven KPIs (counts, readings, compliance rates); see the KPI sketch after this list
Ability to bookmark clips and export reports
Business:
Reduce incident response time by surfacing evidence instantly
Automate reporting to cut down on manual data aggregation
Scale the platform to support hundreds of cameras and multiple use cases without new development
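To ground what "video-driven KPIs" means, here is one plausible shape for a KPI sample emitted by the vision pipeline; again, field names are assumptions for illustration only.

```typescript
// Hypothetical KPI sample: one row per feed per time window.
// Counts, analog readings, and compliance rates all fit this shape.
interface KpiSample {
  feedId: string;
  useCase: UseCase;    // from the data-model sketch above
  windowStart: string; // ISO-8601 start of the aggregation window
  value: number;       // e.g. object count, gauge reading, % compliant
  unit: 'count' | 'reading' | 'percent';
}
```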
How
We followed an incremental, UX-driven process:
Discovery & Research
Interviewed site supervisors, maintenance teams, and safety officers
Mapped existing toolchains and pain points
Analyzed competitor flows (e.g., viso.ai) for best practices
Initial Prototypes
Used Cursor-generated React components to build a working multi-camera grid and video player (a simplified sketch follows this list)
Tested live/recorded toggle, expand on click, and playback controls with operators
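The actual Cursor-generated components are under NDA, but the interaction pattern we tested can be sketched in a few lines of React, reusing the CameraFeed shape from the data-model sketch above. Component and prop names are illustrative, not the shipped code.

```tsx
// Simplified sketch of the prototype grid: click a tile to expand it,
// then toggle between live and recorded.
import { useState } from 'react';

function CameraGrid({ feeds }: { feeds: CameraFeed[] }) {
  const [expandedId, setExpandedId] = useState<string | null>(null);
  const [mode, setMode] = useState<'live' | 'recorded'>('live');

  const expanded = feeds.find((f) => f.id === expandedId);
  if (expanded) {
    return (
      <div>
        <button onClick={() => setExpandedId(null)}>Back to grid</button>
        <button onClick={() => setMode(mode === 'live' ? 'recorded' : 'live')}>
          {mode === 'live' ? 'View recording' : 'Go live'}
        </button>
        {/* Native controls stand in for pause/seek; screenshot and the
            rest of the toolbar would hang off this player. */}
        <video
          src={mode === 'live' ? expanded.liveUrl : expanded.recordingUrl}
          controls
          autoPlay
          muted
        />
      </div>
    );
  }

  return (
    <div className="grid">
      {feeds.map((feed) => (
        <video
          key={feed.id}
          src={feed.liveUrl}
          muted
          autoPlay
          onClick={() => setExpandedId(feed.id)} // one-click expand
        />
      ))}
    </div>
  );
}
```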
Iterative Design & Validation
Introduced dropdown filters for use case (helmet, hazard zone) and camera groups (floor, area); see the filter sketch after this list
Designed alert center that links each notification to a video snippet
Built analytics dashboard wireframes showing object counts over time and analog readings
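The filter state behind those two dropdowns reduces to a small, serializable object, which is one reason saved, shareable filter sets (a later epic) come almost for free. Names below are illustrative:

```typescript
// Illustrative filter state: empty arrays mean "no filter applied".
interface FilterState {
  useCases: UseCase[]; // e.g. ['hazard-zone']
  groups: string[];    // e.g. ['floor-2', 'loading-dock']
}

function applyFilters(feeds: CameraFeed[], f: FilterState): CameraFeed[] {
  return feeds.filter(
    (feed) =>
      (f.useCases.length === 0 ||
        feed.useCases.some((u) => f.useCases.includes(u))) &&
      (f.groups.length === 0 || f.groups.includes(feed.group))
  );
}
```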
Collaboration with Developers
Defined API contracts for video feeds, metadata, and alert events (sketched below)
Integrated Cursor-generated UI with backend services
Conducted usability sessions on the staging build to refine microcopy and toolbar access
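The contracts themselves stayed internal, but their rough shape, with illustrative paths and payloads, was along these lines:

```typescript
// Illustrative API surface; real endpoints and payloads are NDA'd.
//
//   GET /feeds                    -> CameraFeed[]
//   GET /feeds/:id/metadata       -> per-frame analytics metadata
//   GET /alerts?since=<ISO-8601>  -> Alert[] (polled here; a push
//                                   channel is the obvious upgrade)
//
// Typed client helper against the hypothetical alert endpoint:
async function fetchAlertsSince(since: string): Promise<Alert[]> {
  const res = await fetch(`/alerts?since=${encodeURIComponent(since)}`);
  if (!res.ok) throw new Error(`Alert fetch failed: ${res.status}`);
  return res.json() as Promise<Alert[]>;
}
```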
Potential Impact
Once live, we expect to see:
30–50% faster incident investigation, as operators click directly from alerts to video
20% reduction in manual reporting hours with built-in analytics exports
Higher compliance through real-time PPE monitoring and alerts
Continuous Improvement
Future epics planned post-launch:
Advanced AI Insights: Automated root-cause suggestions (e.g., vibration spikes → maintenance ticket)
Mobile App: Critical-alerts-only view for field supervisors
Custom Reporting Templates: Drag-and-drop report builder with video embed support
User-Defined Workflows: Save and share common filter sets and camera layouts
This case study highlights how we turned complex, fragmented video monitoring into a cohesive, user-centric AI vision platform ready to scale.