Inside Enduring Planet's edge-first data layer rebuild
How a Denver climate-tech startup rebuilt their carbon measurement platform around edge databases to handle IoT sensors across mining operations worldwide.
When Enduring Planet's engineering team realized their carbon measurement platform was dropping 15% of sensor readings from remote mining sites, they faced a choice: patch their cloud-centric database architecture with more retries and buffering, or rebuild from the ground up.
The Denver-based climate-tech company, which helps mining and energy companies track emissions in real time, chose the latter. Over eight months in 2025, they migrated from a traditional cloud-first data layer to an edge-first architecture that processes sensor data locally before syncing to central systems.
"We had IoT sensors deployed across copper mines in Chile, coal operations in Wyoming, and natural gas sites throughout the Rockies," explains Sarah Chen, Enduring Planet's head of platform engineering. "When connectivity drops for six hours in a remote location, you can't just lose that environmental data. The compliance implications alone would sink us."
The rebuild touches on broader shifts happening across Denver's tech scene. As the city's aerospace companies push more compute to satellites and edge devices, and energy-tech startups deploy hardware in remote locations, the old model of streaming everything to AWS or Azure is showing its limits.
The connectivity reality check
Enduring Planet's original architecture followed the playbook most Denver startups adopt: React frontend, Node.js API layer, PostgreSQL on RDS, with Redis for caching. Sensors at mining sites would POST readings every 30 seconds to their cloud API, which would validate and store the data for later analysis.
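In outline, the original pattern looked something like the following TypeScript sketch. The endpoint URL, payload shape, and sensor-read stub are illustrative assumptions, not Enduring Planet's actual API:

```typescript
// A minimal sketch of the cloud-first pattern, assuming a simple HTTPS
// ingestion API. Endpoint, payload shape, and readCo2Sensor are invented.
type Reading = { sensorId: string; co2ppm: number; takenAt: string };

function readCo2Sensor(): number {
  return 400 + Math.random() * 50; // stub in place of real hardware access
}

async function reportReading(reading: Reading): Promise<void> {
  const res = await fetch("https://api.example.com/v1/readings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(reading),
  });
  if (!res.ok) {
    // In the cloud-first design, a failed POST meant the reading was gone.
    console.error(`reading dropped: HTTP ${res.status}`);
  }
}

// One reading every 30 seconds, as the article describes.
setInterval(() => {
  void reportReading({
    sensorId: "co2-07",
    co2ppm: readCo2Sensor(),
    takenAt: new Date().toISOString(),
  });
}, 30_000);
```

The weakness is visible in the error path: a failed POST logs and moves on, so every reading sent during an outage is simply gone.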
The system worked fine during development and early pilots around Colorado's Front Range. But as they scaled to remote locations – a copper mine 200 miles from Santiago, a fracking site in North Dakota's Bakken formation – the data loss became impossible to ignore.
"Mining sites aren't exactly known for their fiber infrastructure," Chen notes. "You might have satellite internet that works 80% of the time, or cellular that drops whenever there's weather. Meanwhile, our carbon accounting models need every reading to maintain accuracy."
The team initially tried standard cloud-native solutions: message queues, retry logic, local buffering in their IoT devices. But the fundamental issue remained – they were treating the edge as a dumb data collector rather than a capable computing environment.
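Those mitigations tend to look like the sketch below: a bounded buffer with exponential-backoff retries, reusing the Reading type and reportReading function from the earlier sketch (the buffer size and retry policy are assumptions). It papers over brief blips, but a multi-day partition still overflows the buffer:

```typescript
// Stopgap style: bounded in-memory buffer plus exponential-backoff retries.
// Reuses Reading and reportReading from the previous sketch. Once the buffer
// fills during a long outage, readings are still lost.
const buffer: Reading[] = [];
const MAX_BUFFER = 10_000; // roughly 3.5 days of 30-second readings

function enqueue(reading: Reading): void {
  if (buffer.length >= MAX_BUFFER) buffer.shift(); // oldest reading silently lost
  buffer.push(reading);
}

async function flushWithBackoff(maxAttempts = 8): Promise<void> {
  for (let attempt = 0; buffer.length > 0 && attempt < maxAttempts; attempt++) {
    try {
      await reportReading(buffer[0]); // throws on network failure
      buffer.shift();
      attempt = -1; // reset the backoff after each success
    } catch {
      await new Promise((r) => setTimeout(r, 2 ** attempt * 1_000)); // 1s, 2s, 4s...
    }
  }
}
```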
Building intelligence at the edge
The breakthrough came from an unexpected source: a conversation with engineers from Lockheed Martin's satellite division at a Denver tech meetup focused on distributed systems. The aerospace team described how modern satellites handle intermittent communication with ground stations by maintaining sophisticated local state.
"They talked about satellites that might lose contact with Earth for hours at a time, but still need to make autonomous decisions about trajectory corrections or scientific measurements," Chen recalls. "That's when we realized we were thinking about our mining sites all wrong."
Enduring Planet's new architecture treats each mining site as an autonomous computing node. Instead of simple sensor collectors, they deploy edge servers running SQLite databases that can (see the sketch after this list):
- Store weeks of sensor readings locally
- Run real-time carbon calculations and alerts
- Detect anomalous readings that might indicate equipment failure
- Compress and batch data for efficient uploads when connectivity returns
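Enduring Planet hasn't published its edge code, but the storage side of such a node might look like this better-sqlite3 sketch; the table layout, anomaly rule, and batch size are assumptions for illustration:

```typescript
// Hypothetical edge-node storage layer; not Enduring Planet's actual schema.
import Database from "better-sqlite3";

const db = new Database("/var/lib/edge/readings.db");
db.exec(`
  CREATE TABLE IF NOT EXISTS readings (
    id         INTEGER PRIMARY KEY AUTOINCREMENT,
    sensor_id  TEXT NOT NULL,
    co2_ppm    REAL NOT NULL,
    taken_at   TEXT NOT NULL,
    suspicious INTEGER NOT NULL DEFAULT 0, -- local anomaly flag travels with the row
    synced     INTEGER NOT NULL DEFAULT 0
  )
`);

const insert = db.prepare(
  "INSERT INTO readings (sensor_id, co2_ppm, taken_at, suspicious) VALUES (?, ?, ?, ?)"
);

// Store every reading locally first; the crude range check stands in for
// the real anomaly models described in the article.
export function storeReading(sensorId: string, co2ppm: number): void {
  const suspicious = co2ppm < 0 || co2ppm > 5_000 ? 1 : 0;
  insert.run(sensorId, co2ppm, new Date().toISOString(), suspicious);
}

// When connectivity returns, ship unsynced rows in batches and mark them off.
export function nextBatch(limit = 1_000) {
  return db
    .prepare("SELECT * FROM readings WHERE synced = 0 ORDER BY id LIMIT ?")
    .all(limit);
}

export function markSynced(upToId: number): void {
  db.prepare("UPDATE readings SET synced = 1 WHERE id <= ?").run(upToId);
}
```

Keeping the suspicious flag as a column means local context travels with each row when it syncs, which is the property the conflict handling below depends on.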
The edge servers run a custom synchronization protocol that reconciles local data with the central PostgreSQL cluster when connections are available. Unlike simple backup-and-restore approaches, the system handles conflicts intelligently – if a sensor reading was flagged as suspicious locally but later validated by technicians, that context travels with the data.
The synchronization challenge
Building the sync layer proved more complex than the team anticipated. Traditional database replication assumes reliable connections between nodes. Enduring Planet needed something that could handle network partitions lasting days.
"We looked at CRDTs [Conflict-free Replicated Data Types] and eventually settled on a hybrid approach," says Chen. "Most sensor readings are append-only with timestamps, so conflicts are rare. But when technicians override readings or recalibrate sensors, we need human-readable resolution rules."
The team built a custom consensus layer that treats the central cloud database as the source of truth, but allows edge nodes to make local decisions during partitions. When connectivity returns, a reconciliation process identifies conflicts and applies business-logic rules to resolve them.
For example, if an edge node flags a CO2 reading as anomalous but the cloud system later receives calibration data that validates the reading, the edge node updates its local models to prevent similar false positives.
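In code, such business-logic rules can be as simple as an ordered rule table. Everything below — the record shapes, flag names, and the rules themselves — is invented to illustrate the approach, not Enduring Planet's actual resolution logic:

```typescript
// Hedged sketch of business-rule conflict resolution: the cloud is the
// source of truth, but edge-side context is resolved explicitly rather
// than by blind last-writer-wins.
type EdgeRecord = { id: number; co2ppm: number; suspicious: boolean };
type CloudRecord = { id: number; co2ppm: number; validatedByTech: boolean };

type Resolution =
  | { keep: "cloud"; reason: string }
  | { keep: "edge"; reason: string };

function reconcile(edge: EdgeRecord, cloud: CloudRecord): Resolution {
  // Rule 1: a technician's validation in the cloud overrides a local flag.
  if (edge.suspicious && cloud.validatedByTech) {
    return { keep: "cloud", reason: "technician validated reading" };
  }
  // Rule 2: an unreviewed local flag is preserved so downstream carbon
  // models can weight the reading appropriately.
  if (edge.suspicious) {
    return { keep: "edge", reason: "unreviewed local anomaly flag" };
  }
  // Default: the cloud wins plain data conflicts.
  return { keep: "cloud", reason: "cloud is source of truth" };
}
```

A rule table like this is also where the false-positive feedback described above would hook in: each "technician validated" resolution can be fed back into the edge node's local anomaly thresholds.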
Numbers that matter
Six months after completing the migration, Enduring Planet's data reliability improved dramatically:
- Data loss dropped from 15% to less than 0.3%
- Alert response time at remote sites decreased from 4-6 minutes to under 30 seconds
- Bandwidth usage fell by 40% due to local processing and compression
- Customer SLA compliance increased from 87% to 99.2%
The edge-first approach also enabled new features that weren't possible with the cloud-centric model. Real-time carbon accounting alerts now trigger within seconds of detecting emission spikes, rather than waiting for data to round-trip through the cloud.
"Our customers started asking for capabilities we couldn't deliver with the old architecture," Chen explains. "Like automated equipment shutdown when emissions exceed thresholds. You can't do that with a 30-second delay to the cloud and back."
What Denver teams can learn
Enduring Planet's experience reflects broader trends in how Denver's tech companies handle distributed data. The city's unique mix of industries – aerospace, energy, outdoor equipment – often requires deploying technology in environments where cloud connectivity isn't guaranteed.
"The traditional cloud-first approach works great for SaaS companies serving urban users with good internet," notes Chen. "But Denver has this concentration of companies dealing with physical infrastructure in remote locations. Edge-first architectures are becoming table stakes."
Several patterns from Enduring Planet's rebuild apply across industries:
- Treat edge nodes as capable computers, not dumb sensors. Local processing enables real-time decisions and reduces dependency on connectivity.
- Design for partition tolerance from day one. Don't treat network failures as exceptional cases – plan for them as normal operating conditions.
- Build conflict resolution into your data model. Edge-first systems need clear rules for handling data that diverges during network partitions.
- Optimize for eventual consistency, not immediate consistency. Many business use cases can tolerate temporary inconsistency in exchange for local responsiveness.
For Denver developer groups exploring similar architectures, Chen recommends starting small: "Pick one use case where local processing provides clear value, build that well, then expand. Don't try to rebuild your entire data layer at once."
FAQ
What database technologies work best for edge deployments?
Enduring Planet uses SQLite for edge nodes due to its reliability and zero-administration requirements, with PostgreSQL for the central cluster. Other teams are exploring embedded databases like DuckDB for analytical workloads, or distributed databases like CockroachDB that handle multi-region consistency automatically.
How do you handle schema changes across distributed edge nodes?
Schema migrations require careful planning in edge-first systems. Enduring Planet deploys backward-compatible changes first, ensures all edge nodes can handle both old and new data formats, then gradually migrates. The process takes longer than centralized deployments but avoids breaking remote sites during connectivity outages.
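This is essentially the expand/contract migration pattern. Here is a sketch of the expand step against the hypothetical edge schema from earlier, using better-sqlite3 (the calibration_id column is invented for illustration):

```typescript
// Expand phase of a backward-compatible migration on an edge node.
import Database from "better-sqlite3";

const db = new Database("/var/lib/edge/readings.db");

// Add a nullable column only if it isn't there yet, so old and new edge
// code can write rows side by side during the rollout.
const cols = db.pragma("table_info(readings)") as { name: string }[];
if (!cols.some((c) => c.name === "calibration_id")) {
  db.exec("ALTER TABLE readings ADD COLUMN calibration_id TEXT");
}
// The contract phase (backfill, enforce NOT NULL) ships only after every
// remote site has upgraded -- often months later for nodes on flaky links.
```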
What's the total cost comparison between edge-first and cloud-centric approaches?
While edge hardware adds upfront costs, Enduring Planet reduced their cloud infrastructure spending by 35% through local processing and data compression. The bigger savings come from improved SLA compliance and reduced operational overhead from investigating data loss issues.
Exploring distributed systems architecture? Connect with Denver's growing community of platform engineers and infrastructure specialists.