What Happened in the Last 4 Hours? Pulse Knows Before You Ask
Pulse is Sundae's real-time operations command center - tracking live revenue, shift performance, anomaly alerts, and hourly pace across every location. When something goes wrong mid-shift, Pulse tells you before the shift ends.
The 2:15pm Alert That Saved a Friday
Fatima managed operations for a 14-location restaurant group across Dubai. On a typical Friday, her routine was predictable: review morning flash reports at 8am, check in with GMs around 11am, address any afternoon issues as they surfaced, and compile the daily close report by 9pm. Most of the day was reactive - problems found her when they were already problems.
At 2:15pm on a Friday in January, her phone buzzed with a Pulse alert: "Location 7 lunch revenue 35% below hourly target. Current pace: AED 6,400 vs expected AED 9,800. Deviation started at approximately 11:30am."
Fatima called the GM at Location 7. Everything seemed normal from his perspective - the dining room was moderately busy, the kitchen was operating, no staff call-outs. But when he checked the online order queue, it was empty. Zero delivery and pickup orders since 11:30am. On a Friday, when online orders typically represented 40% of lunch revenue.
Investigation revealed the kitchen printer had jammed at 11:28am. The POS was still receiving online orders, but the kitchen was not printing tickets. The delivery platforms' automated systems had escalated from "delayed" to "auto-cancel" after 25 minutes without preparation confirmation. By 2:15pm, approximately 35 online orders had been auto-cancelled - representing roughly AED 3,400 in lost revenue plus the reputational damage of 35 cancelled orders hitting the restaurant's platform ratings.
The GM replaced the printer paper roll (the actual issue - not a hardware failure), and online orders resumed within 10 minutes. The revenue gap for that shift could not be fully recovered, but the alert limited the damage to 2.5 hours instead of the full 5-hour afternoon shift. Without Pulse, the problem would have been discovered at daily close - 7 hours after it started - by which point the delivery platform rating damage would have been significantly worse.
This is not a hypothetical. This is the kind of operational failure that happens across restaurant groups every week. Equipment failures, system glitches, staff no-shows, supplier delays - the question is not whether these events occur but how quickly you detect and respond to them. Pulse exists to close that detection gap from hours to minutes.
Why Real-Time Matters in Restaurant Operations
Restaurant operations are perishable. A manufacturing plant that detects a quality issue can recall products. An e-commerce company that spots a conversion drop can revert a code change. A restaurant that discovers a bad lunch shift at the end of the day cannot go back and serve those guests differently. The revenue is gone. The reviews are posted. The delivery platform rankings are adjusted.
This perishability creates an asymmetric value equation for real-time monitoring: the cost of early detection is minimal (an alert, a phone call, a quick investigation), while the cost of late detection compounds with every hour. A kitchen printer down for 30 minutes costs AED 1,200 in cancelled orders. The same printer down for 5 hours costs AED 8,000 in cancelled orders plus a delivery platform rating drop that suppresses future order volume for weeks.
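The asymmetry above can be sketched as a back-of-envelope model. The rates and figures below are illustrative assumptions for demonstration, not Pulse internals or real platform economics:

```python
# Back-of-envelope model of downtime cost (all rates are illustrative
# assumptions). Direct loss grows linearly with detection delay; a
# reputational penalty kicks in once cancellations run long enough to
# hit delivery platform ratings.

def lost_revenue(hours_down: float,
                 lost_orders_per_hour: float = 7.0,
                 avg_order_value: float = 95.0,
                 rating_penalty_after_hours: float = 1.0,
                 rating_damage_cost: float = 2000.0) -> float:
    """Estimated AED lost for a given detection delay."""
    direct = hours_down * lost_orders_per_hour * avg_order_value
    # Rating damage only accrues once the outage runs long enough
    # for auto-cancellations to affect platform rankings.
    reputational = rating_damage_cost if hours_down > rating_penalty_after_hours else 0.0
    return direct + reputational

print(lost_revenue(0.5))  # caught in 30 minutes: direct loss only
print(lost_revenue(5.0))  # caught at daily close: direct loss plus rating damage
```

The point of the model is not the specific numbers but the shape: direct loss is linear in detection delay, while the reputational term is a step function that late detection almost always triggers.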
Traditional restaurant reporting operates on a daily close cycle. Revenue is reconciled at end of day, variances are identified next morning, corrective action happens 12-24 hours after the problem began. For slow-moving trends (food cost creep, gradual labor drift), daily reporting is adequate. For acute operational failures - the events that cause immediate, compounding revenue loss - daily reporting is catastrophically slow.
Pulse bridges this gap. It operates on a continuous monitoring cycle, tracking revenue pace, operational metrics, and anomaly indicators in real time across every location. When something deviates from expected patterns, the alert fires within minutes - not hours, not the next morning.
The Six Sub-Modules of Pulse
Pulse is not a single dashboard. It is a command center composed of six interconnected sub-modules, each serving a specific operational monitoring function.
1. Overview Dashboard
The overview is the command center's home screen - a single view showing the real-time operational status of every location in your portfolio. Designed for the operator who needs to answer "how are we doing right now?" in under 10 seconds.
Key elements:
Portfolio health indicator: A traffic light system showing how many locations are performing above target (green), within acceptable range (amber), or below threshold (red). At a glance, you see whether the portfolio needs attention or is running smoothly.
Revenue pace by location: Current-hour and current-shift revenue compared to the same period's historical average and target. Each location shows its pace as a percentage - "Location 3 is at 112% of target pace" or "Location 9 is at 74% of target pace."
Active alerts counter: How many unresolved alerts exist across the portfolio, categorized by severity (critical, warning, informational).
Today vs yesterday vs same-day-last-week: Quick comparison showing whether today's trajectory is improving, declining, or stable relative to recent benchmarks.
The overview dashboard is designed for two user profiles: the executive who checks once per hour for a pulse (hence the name) on portfolio health, and the operations manager who keeps it open all day as a real-time monitoring screen.
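The traffic-light logic described above reduces to a simple pace classifier. The thresholds below (95% and 85% of target pace) are illustrative assumptions, not Pulse defaults:

```python
# Classify each location's revenue pace into a traffic-light status.
# Thresholds are illustrative; a real deployment would make them
# configurable per location and day-of-week.

def status(pace_pct: float, green_at: float = 95.0, amber_at: float = 85.0) -> str:
    """pace_pct is current revenue as a percentage of target pace."""
    if pace_pct >= green_at:
        return "green"
    if pace_pct >= amber_at:
        return "amber"
    return "red"

# Pace figures from the examples above (Location 7's is hypothetical):
portfolio = {"Location 3": 112.0, "Location 7": 65.0, "Location 9": 74.0}
print({loc: status(pace) for loc, pace in portfolio.items()})
```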
2. Shift Tracker
Restaurants operate in shifts, and shift boundaries are where accountability lives. The shift tracker monitors performance within the current shift and provides shift-over-shift comparison:
Current shift progress: How far through the shift are we (by time), and how far through the revenue target have we progressed? A shift that is 60% complete in time but only 40% through its revenue target is trending toward a miss - and the earlier that is visible, the more options exist for course correction.
Shift comparison: This shift vs the same shift last week, same shift last month, and the trailing 4-week average for the same shift. Context that tells you whether a slow Tuesday lunch is concerning (it is usually busier) or normal (Tuesday lunch is always slow).
Covers and average check: Real-time tracking of guest count and average transaction value. A shift that is hitting revenue target through higher average check despite lower guest count tells a different operational story than one that is hitting target through volume.
Shift handoff intelligence: When one shift ends and another begins, Pulse generates a handoff summary: what happened, what is in progress, what needs attention. The closing manager's knowledge transfers to the opening manager automatically - no sticky notes, no verbal handoffs that get lost.
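The "60% through the shift but only 40% through the target" comparison can be expressed as a pace index. This is a minimal sketch of that calculation; the function name and output fields are illustrative assumptions:

```python
from datetime import datetime

def shift_pace(start: datetime, end: datetime, now: datetime,
               revenue_so_far: float, revenue_target: float) -> dict:
    """Compare progress through the shift (time) with progress toward
    the revenue target. A pace_index below 1.0 means revenue is lagging
    elapsed time and the shift is trending toward a miss."""
    time_frac = (now - start) / (end - start)
    revenue_frac = revenue_so_far / revenue_target
    return {
        "time_pct": round(time_frac * 100, 1),
        "revenue_pct": round(revenue_frac * 100, 1),
        "pace_index": round(revenue_frac / time_frac, 2),
    }

# A 5-hour shift, 3 hours in, at 40% of a hypothetical AED 10,000 target:
print(shift_pace(datetime(2025, 1, 10, 11), datetime(2025, 1, 10, 16),
                 datetime(2025, 1, 10, 14), 4000.0, 10000.0))
```

The earlier a pace index below 1.0 surfaces, the more of the shift remains in which to correct course.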
3. Alerts Engine
The alerts engine is Pulse's nervous system. It continuously monitors operational data streams against expected patterns and fires notifications when deviations exceed configured thresholds.
Alert categories:
Revenue anomalies: Revenue pace falls below target threshold. Configurable by location, shift, and day-of-week. A 20% deviation at a location that normally operates within 5% of target triggers a different urgency level than the same deviation at a location with high natural variance.
Void pattern alerts: Unusual void activity by volume, value, or timing. A sudden spike in voids during a specific shift or by a specific cashier triggers investigation. This overlaps with revenue assurance, but operates in real time rather than on the end-of-day cycle that revenue assurance follows.

Labor spike detection: Actual labor hours or labor cost exceeding the shift plan by more than a configured threshold. This catches situations where extra staff were called in without authorization, overtime is accumulating unexpectedly, or scheduled staff are clocking in early or out late.
Speed of service alerts: Average ticket time exceeding acceptable thresholds. When the kitchen is backing up and average ticket time extends from 12 minutes to 22 minutes, the guest experience is degrading - and delivery platform algorithms are adjusting rankings downward in real time.
Online order disruptions: Drop in online order volume relative to expected patterns. This is what caught Fatima's kitchen printer issue - the absence of expected orders is as significant a signal as the presence of unexpected problems.
Each alert includes three components: what happened (the metric and deviation), context (historical comparison and possible causes), and suggested action (what to investigate first). Alerts are not just alarms - they are starting points for operational response.
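The three-component structure can be modeled concretely. The class, field names, and the 30% threshold below are illustrative assumptions (the example data mirrors the Location 7 incident), not the product's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    """One Pulse-style alert: what happened, context, suggested action.
    Field names are illustrative, not the product's actual schema."""
    metric: str
    deviation: str
    context: str
    suggested_action: str
    severity: str = "warning"  # critical | warning | informational

def revenue_pace_alert(location: str, actual: float, expected: float):
    """Fire a critical alert when pace falls more than 30% below expected.
    The threshold is an illustrative assumption."""
    shortfall = 1 - actual / expected
    if shortfall <= 0.30:
        return None
    return Alert(
        metric=f"{location} revenue pace",
        deviation=f"{shortfall:.0%} below expected (AED {actual:,.0f} vs {expected:,.0f})",
        context="Compare to same shift last week; deviation may be channel-specific.",
        suggested_action="Check the online order queue and kitchen ticket flow first.",
        severity="critical",
    )

alert = revenue_pace_alert("Location 7", 6400, 9800)
print(alert.deviation)  # 35% below expected (AED 6,400 vs 9,800)
```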
4. Live KPIs
Live KPIs provide continuously updating key performance indicators that refresh on a sub-hour cycle. Unlike the overview dashboard (which shows summary status), live KPIs show the actual numbers in real time:
- Revenue: Current hour, current shift, current day - actual vs target
- Transactions: Count, average value, payment method mix
- Labor: Staff on floor, labor cost accruing, labor-to-revenue ratio for current shift
- Speed: Average ticket time, orders in queue, kitchen throughput
- Delivery: Orders by platform, acceptance rate, average delivery time
- Guest flow: Covers per hour, table turn time, wait list depth
Live KPIs are designed for the GM who manages by the numbers - the operator who wants to see AED 4,287 in current-hour revenue rather than a green traffic light. Both views are valid; live KPIs serve the detail-oriented operator while the overview serves the big-picture executive.
5. Exception Monitoring
Exception monitoring goes beyond alerts to track operational events that individually may not trigger notifications but collectively reveal patterns:
Discount clustering: Multiple discounts applied in rapid succession, suggesting a systematic discount application rather than individual guest situations.
Refund patterns: Refund frequency and timing that deviates from normal - potentially indicating a process issue or a quality problem generating guest complaints.
Payment anomalies: Unusual payment method distributions (sudden increase in cash transactions, multiple split payments) that may indicate system issues or require investigation.
Inventory movements: Unexpected inventory adjustments, waste entries, or transfer requests that fall outside normal patterns.
Clock-in/clock-out anomalies: Staff clocking in significantly before or after scheduled times, buddy punching indicators, or missed clock-outs.
Exception monitoring is the module that finds the problems nobody is looking for. Individual exceptions are noise. Patterns of exceptions are signals. Pulse's exception monitoring separates the two by tracking exception frequency, clustering, and correlation over time.
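Separating noise from signal here is essentially windowed clustering. A minimal sketch for the discount-clustering case, where the 15-minute window and the threshold of four discounts are illustrative assumptions:

```python
# Flag discount clustering: several discounts by one cashier inside a
# short rolling window. Window length and count threshold are
# illustrative assumptions, not Pulse defaults.
from collections import defaultdict

def discount_clusters(events, window_minutes=15, threshold=4):
    """events: list of (cashier_id, minute_of_day) discount applications.
    Returns cashiers with >= threshold discounts inside any rolling window."""
    by_cashier = defaultdict(list)
    for cashier, minute in events:
        by_cashier[cashier].append(minute)
    flagged = set()
    for cashier, minutes in by_cashier.items():
        minutes.sort()
        for i in range(len(minutes)):
            # Count discounts in the window opening at minutes[i].
            in_window = sum(1 for m in minutes[i:] if m - minutes[i] <= window_minutes)
            if in_window >= threshold:
                flagged.add(cashier)
                break
    return flagged

events = [("C12", 760), ("C12", 763), ("C12", 768), ("C12", 771),  # clustered
          ("C07", 700), ("C07", 820)]                              # isolated
print(discount_clusters(events))  # {'C12'}
```

The isolated discounts from cashier C07 never trip the detector; only the burst from C12 does - the "patterns are signals" distinction made above.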
6. Operational Scorecards
Scorecards translate real-time data into shift-end and day-end performance evaluations. When a shift ends, Pulse automatically generates a scorecard that rates performance across key dimensions:
- Revenue attainment: Actual vs target, with context on guest volume vs average check contribution
- Labor efficiency: Actual labor cost vs plan, with breakdown of variance drivers
- Speed of service: Average ticket time vs target, with peak-hour detail
- Guest satisfaction signals: Real-time review scores, complaint frequency, return visit indicators
- Operational compliance: Exception count, void rate, discount rate vs policy thresholds
Scorecards serve two purposes: immediate feedback (how did this shift go?) and longitudinal tracking (how is this location's lunch shift trending over the past 30 days?). The combination enables both tactical response and strategic pattern recognition.
The Draft/Publish Model
Pulse configuration follows a draft/publish model that prevents accidental changes to live monitoring:
Draft mode: All configuration changes (alert thresholds, KPI targets, scorecard weights, notification routing) are made in draft mode. Changes are visible only to the user making them and do not affect live monitoring.
Review: Before publishing, changes can be reviewed by a second user (typically the operations director or regional manager) to ensure thresholds are appropriate and notification routing is correct.
Publish: Publishing applies the draft configuration to live monitoring. The previous configuration is retained as a rollback point in case the new settings generate too many false alerts or miss genuine issues.
This model is essential for multi-location groups where a single misconfigured alert threshold could flood the operations team with false positives across 40 locations. The draft/publish cycle ensures that configuration changes are deliberate and reviewed.
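The draft/publish/rollback cycle can be sketched in a few lines. The class and method names below are illustrative assumptions, not Pulse's actual API:

```python
# Minimal sketch of a draft/publish configuration cycle with rollback.
# Class and method names are illustrative, not Pulse's actual API.
import copy

class MonitoringConfig:
    def __init__(self, live: dict):
        self.live = live        # configuration driving live alerts
        self.previous = None    # rollback point from the last publish
        self.draft = None       # pending, unreviewed changes

    def edit_draft(self, changes: dict) -> None:
        """Stage changes without touching live monitoring."""
        if self.draft is None:
            self.draft = copy.deepcopy(self.live)
        self.draft.update(changes)

    def publish(self) -> None:
        """Apply the draft; retain the old config as a rollback point."""
        if self.draft is None:
            raise ValueError("nothing staged to publish")
        self.previous, self.live, self.draft = self.live, self.draft, None

    def rollback(self) -> None:
        """Restore the pre-publish configuration."""
        if self.previous is None:
            raise ValueError("no rollback point")
        self.live, self.previous = self.previous, None

cfg = MonitoringConfig({"revenue_deviation_pct": 20})
cfg.edit_draft({"revenue_deviation_pct": 10})  # live alerts unaffected
cfg.publish()                                  # now live; old config retained
print(cfg.live)  # {'revenue_deviation_pct': 10}
cfg.rollback()   # too many false positives? restore the old threshold
print(cfg.live)  # {'revenue_deviation_pct': 20}
```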
Real-Time Intelligence in Practice
The value of Pulse compounds with scale. A 3-location operator can keep a mental model of each location's performance through direct observation and phone calls. A 15-location operator cannot. A 40-location operator definitely cannot.
At scale, the math of real-time monitoring becomes compelling:
Detection speed: Average time from operational issue to detection drops from 4-8 hours (daily close review) to 15-45 minutes (Pulse alert). For revenue-impacting issues, this typically represents 60-80% reduction in lost revenue per incident.
Response quality: Alerts with context (historical comparison, possible causes, suggested actions) produce faster and more effective responses than raw anomaly detection. Operators spend less time diagnosing and more time resolving.
Pattern prevention: Exception monitoring catches recurring patterns before they become embedded operational habits. A cashier who applies unauthorized discounts three times in a week is a coaching opportunity. The same behavior undetected for three months is an embedded loss.
Shift accountability: Scorecards create a feedback loop that did not exist with end-of-day reporting. Shift managers see their performance measured and compared - not as punishment, but as the same kind of performance tracking that every other industry considers standard.
Configuring Pulse for Your Operation
Pulse's effectiveness depends on calibration. Alert thresholds set too tight generate alert fatigue. Thresholds set too loose miss genuine issues. The calibration process follows three phases:
Phase 1: Observation (Weeks 1-2). Run Pulse in monitoring-only mode with default thresholds. Observe what alerts would have fired based on historical data. Identify false positives and missed events.
Phase 2: Calibration (Weeks 3-4). Adjust thresholds based on observation data. Set location-specific thresholds where natural variance differs (a food court location has different revenue variance than a standalone restaurant). Configure notification routing so that the right alerts reach the right people.
Phase 3: Optimization (Ongoing). Continuously refine thresholds based on alert accuracy. Track false positive rate and missed event rate. The goal is a system where every alert that fires represents a genuine operational situation that warrants attention - and every genuine situation generates an alert.
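Tracking the two failure modes named above - false positives and missed events - comes down to two ratios. A minimal sketch with hypothetical calibration numbers; the function and metric names are illustrative assumptions:

```python
# Track alert accuracy during calibration (metric definitions and the
# sample numbers are illustrative assumptions).

def alert_quality(true_positives: int, false_positives: int,
                  missed_events: int) -> dict:
    """false_positive_rate: share of fired alerts that were not genuine.
    miss_rate: share of genuine events that never produced an alert."""
    fired = true_positives + false_positives
    genuine = true_positives + missed_events
    return {
        "false_positive_rate": round(false_positives / fired, 3) if fired else 0.0,
        "miss_rate": round(missed_events / genuine, 3) if genuine else 0.0,
    }

# Hypothetical week-3 calibration data:
print(alert_quality(true_positives=42, false_positives=18, missed_events=6))
```

The optimization goal stated above is driving both ratios toward zero: every fired alert genuine, every genuine event alerted.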
Closing
Restaurant operations are real-time. Your reporting should be too. Daily close reports are necessary for accounting and reconciliation - but they are insufficient for operational management. By the time yesterday's numbers tell you about a problem, today's shift is already halfway over.
Pulse does not replace daily, weekly, or monthly reporting. It adds the real-time layer that catches the acute issues - the kitchen printer jams, the sudden revenue drops, the labor spikes, the void anomalies - before they compound into shift-killing, day-ruining, rating-destroying problems.
Fatima's 2:15pm alert did not just save AED 3,400 in that single Friday shift. It saved the delivery platform rating that drives AED 40,000+ in weekly online orders at that location. The ROI of real-time monitoring is not the individual alert - it is the cascade of consequences that the alert prevents.
Book a demo to see Pulse running on live restaurant data - and experience the difference between knowing what happened yesterday and knowing what is happening right now.