
A solo developer built a comprehensive business intelligence platform connecting a legacy ERP database to a modern Next.js dashboard — spanning a full CRM with Kanban boards, purchase-vs-sales immobilization analysis, weekly treasury commitments, cable price comparison tools, and a complete storage migration from JSON to SQLite. What started as simple SQL query visualization evolved into a 20+ page enterprise analytics suite serving an electrical equipment distributor's entire operations team across sales, purchasing, treasury, and management.
Builder of firstcommit.io. I ship with AI and share the journey.
8 stages to follow
Paste this in your AI coding tool to build your own version, guided by the approach in this guide.
Build the project from this First Commit guide: 758a9f78-3159-4a1d-874f-ff177c9b33a4
No source repo linked — follow the guide stages to build from scratch.
The project started by building a modular query engine that could connect to the legacy ERP's Firebird database. Each business intelligence query was made self-contained: its own SQL template file with parameter placeholders, a TypeScript module that registers itself with the system, and typed interfaces for results. The API reads SQL at runtime, performs string replacement for parameters like date ranges and company IDs, then executes against the database. A dynamic frontend generates forms and routing based on query metadata, organized into categories like sales, purchases, inventory, and sales reps. This architecture was critical: any new analysis could be added by creating a folder, with no API route changes needed. The initial dashboard presented these as a flat grid of category cards, which later evolved into grouped sections.
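The self-registering query pattern described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code; all identifiers (`registerQuery`, `renderSql`, the example query) are assumptions.

```typescript
// Minimal sketch of a self-registering query engine. Each query module calls
// registerQuery() at import time, so adding a new analysis never touches the
// API routes — the route only looks the query up in the registry.
type QueryDef = {
  id: string;
  category: string;    // e.g. "sales", "purchases", "inventory"
  sqlTemplate: string; // SQL with {{param}} placeholders
  params: string[];    // parameter names the template expects
};

const registry = new Map<string, QueryDef>();

function registerQuery(def: QueryDef): void {
  registry.set(def.id, def);
}

// Runtime parameter substitution by plain string replacement,
// mirroring the "read SQL at runtime, replace placeholders" approach.
function renderSql(id: string, values: Record<string, string>): string {
  const def = registry.get(id);
  if (!def) throw new Error(`Unknown query: ${id}`);
  return def.params.reduce(
    (sql, p) => sql.split(`{{${p}}}`).join(values[p] ?? ""),
    def.sqlTemplate,
  );
}

// A query module registering itself (illustrative example):
registerQuery({
  id: "sales-by-period",
  category: "sales",
  sqlTemplate:
    "SELECT * FROM INVOICES WHERE DATE BETWEEN '{{from}}' AND '{{to}}'",
  params: ["from", "to"],
});
```

Note that raw string substitution is only safe when parameters are validated first; a production version would prefer the driver's parameterized queries.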
~40 messages

The purchasing department needed to understand what they were buying versus what was actually selling, so a comprehensive 5-tab analysis dashboard was built. The first challenge was dirty data — payroll entries and internal transfers were mixed in with product data, so a hideable products system was created where items could be filtered out and the hidden list persisted, ordered by the size of the purchase-sale gap. Next came the immobilization calculation: rather than checking if something sold in a single month, a configurable rolling window (defaulting to 3 months) tracked whether purchased products had any corresponding sales. The temporal evolution tab showed immobilization trends over time using Chart.js. A critical discovery was that the same physical product often appeared under different codes across brands — a product equivalence mapper was built to let users group equivalent items and set unit conversion factors (e.g., 1 box = 100 meters of cable). The family grouping tab aggregated data by product hierarchy. Charts would break when users switched date filters rapidly due to stale React state, which required careful cleanup of chart instances.
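The rolling-window immobilization check and the unit conversion could look something like this. A hedged sketch only: the record shapes and whether the window includes the purchase month itself are assumptions the guide doesn't specify.

```typescript
// A product is considered "immobilized" if it was purchased in a given month
// but had no sales within the configurable rolling window (default 3 months).
type MonthlyMovement = {
  month: string; // "YYYY-MM", sorted ascending
  purchasedQty: number;
  soldQty: number;
};

function isImmobilized(
  history: MonthlyMovement[],
  purchaseMonthIndex: number,
  windowMonths = 3,
): boolean {
  const window = history.slice(
    purchaseMonthIndex,
    purchaseMonthIndex + windowMonths,
  );
  const sold = window.reduce((sum, m) => sum + m.soldQty, 0);
  return history[purchaseMonthIndex].purchasedQty > 0 && sold === 0;
}

// Unit conversion for product equivalences, e.g. 1 box = 100 meters of cable:
function toBaseUnits(qty: number, conversionFactor: number): number {
  return qty * conversionFactor;
}
```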
~130 messages

A full CRM was built from scratch since the company's sales team had no opportunity tracking. The core is a Kanban board with 6+ stages from initial contact through negotiation to approved/rejected closure, plus a later closing-in-progress stage added at stakeholder request. Each opportunity stores flexible metadata including Trello-style colored labels for tagging and private reminders visible only to the assigned seller. The "Approved This Month" KPI was particularly tricky — it needed to use actual invoice dates rather than when an opportunity was moved to the approved column, and opportunities sitting in closing-in-progress for more than two weeks shouldn't count. Stage history tracking was added to know exactly when each opportunity moved between stages, with backward compatibility for older opportunities that didn't have history. A statistics page shows pipeline analytics with time-per-stage calculations. A complete visual redesign added dark/light mode with corporate branding. The list view got hideable columns with per-user persistence. A particularly interesting design decision came from human psychology: when management could see all quotations, sellers felt pressured. The cancellation metric was redesigned so cancelling an opportunity doesn't penalize the seller — the metric tracks creation volume instead, assuming a natural cancellation rate.
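The "Approved This Month" rule described above (count by invoice date, not column-move date, and exclude stale closing-in-progress cards) can be sketched as below. Field names and the exact treatment of fresh closing-in-progress opportunities are assumptions.

```typescript
// KPI predicate: an opportunity counts toward "Approved This Month" if it was
// invoiced this calendar month, or is still fresh (<= two weeks) in the
// closing-in-progress stage. Schema is illustrative.
type Opportunity = {
  stage: string;          // e.g. "approved", "closing-in-progress"
  invoiceDate?: Date;     // set when the sale is actually invoiced
  stageEnteredAt: Date;   // when the current stage was entered
};

const TWO_WEEKS_MS = 14 * 24 * 60 * 60 * 1000;

function countsAsApprovedThisMonth(op: Opportunity, now: Date): boolean {
  if (op.stage === "approved" && op.invoiceDate) {
    // Use the real invoice date, not when the card was dragged to "approved".
    return (
      op.invoiceDate.getFullYear() === now.getFullYear() &&
      op.invoiceDate.getMonth() === now.getMonth()
    );
  }
  if (op.stage === "closing-in-progress") {
    // Cards stuck in closing-in-progress for over two weeks don't count.
    return now.getTime() - op.stageEnteredAt.getTime() <= TWO_WEEKS_MS;
  }
  return false;
}
```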
~200 messages

A treasury section was created with three main tools: a cash flow calendar, a weekly payment commitments page, and a tax position page. The weekly commitments page was the most complex, aggregating three very different data sources into a unified view. Company-issued checks came from a Google Spreadsheet rather than the database because the spreadsheet had better data quality. Invoices due required split payment logic — dividing amounts by the number of installments (up to 5 payment dates per invoice), matching the exact same calculation already used in the cash flow calendar. Payroll estimation was the trickiest: employee salary data was stored in the ERP as supplier invoices rather than in a dedicated payroll table, requiring creative filtering to identify which invoices were actually salaries. The estimation uses a 25th-to-25th date window from the previous month to predict current month obligations. Professional fees were separated from salaries using different document type filters. A stakeholder requested that all data appear on a single page without refreshing when switching between tabs.
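The installment split for invoices due might look like this minimal sketch. Working in cents and pushing the rounding remainder onto the last installment is an assumption; the guide only states that amounts are divided across up to 5 payment dates.

```typescript
// Split an invoice total across up to 5 installments. Integer cents avoid
// floating-point drift, and the remainder lands on the last installment so
// the parts always sum back to the original amount.
function splitInstallments(totalCents: number, count: number): number[] {
  if (count < 1 || count > 5) throw new Error("1-5 installments supported");
  const base = Math.floor(totalCents / count);
  const parts = Array(count).fill(base);
  parts[count - 1] += totalCents - base * count;
  return parts;
}
```

Keeping this in one shared function (used by both the cash flow calendar and the weekly commitments page) is what guarantees the "exact same calculation" the guide mentions.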
~100 messages

Two separate pages — client debt tracking and installer commission tracking — were merged into a unified collections page after realizing the same person used both. The commission tracking system allows editing individual balance amounts for partial payments (rounding to avoid floating-point display issues), and has a discard button for excluding entries. A percentage discount is automatically applied for certain payment terms before calculating the commission. The installer data source was completely rewritten: the original approach used a stored procedure with undocumented single-character flag parameters. After trying to reverse-engineer the flags, the decision was made to write direct SQL joins instead. A search filter was added for quickly finding specific clients. The default commission rate was set to 10% based on company policy.
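The order of operations matters here: discount first, then commission. A hedged sketch, with the discount rate illustrative (the guide only states the 10% default commission):

```typescript
// Apply the payment-term discount to the balance first, then take the
// commission percentage, rounding to whole cents to avoid the floating-point
// display issues mentioned above.
function commission(
  balance: number,
  discountPct = 0,    // applied for certain payment terms
  commissionPct = 10, // company default
): number {
  const discounted = balance * (1 - discountPct / 100);
  return Math.round(discounted * (commissionPct / 100) * 100) / 100;
}
```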
~60 messages

The company was comparing cable prices across suppliers using a massive Google Spreadsheet with 17 sheets. This was completely replaced with an in-app solution. The new tool supports importing supplier price lists from XLSX files, with each import creating a new page in the comparison system. Dead stock detection highlights products with no sales in 7+ months, pulling real stock levels from the ERP via automatic synchronization on page load. Column filters mimicking spreadsheet-style dropdowns were added to all table columns. A row selection feature lets users click multiple rows and see summed values above the table. Certification data tracking which products have electrical safety certifications was originally imported from Google Sheets and needed to be preserved when new XLSX imports happened. Sales rotation tracking shows monthly breakdown with the ability to filter and sum selected rows.
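The dead-stock rule (stock on hand but no sales in 7+ months) reduces to a small predicate. The record shape and the treatment of never-sold products are assumptions:

```typescript
// Flag products with stock on hand but no sales in the threshold window.
type StockRecord = {
  sku: string;
  stockQty: number;            // synced from the ERP on page load
  lastSaleDate: Date | null;   // null = never sold
};

function isDeadStock(
  rec: StockRecord,
  now: Date,
  thresholdMonths = 7,
): boolean {
  if (rec.stockQty <= 0) return false;
  if (!rec.lastSaleDate) return true;
  const monthsSinceSale =
    (now.getFullYear() - rec.lastSaleDate.getFullYear()) * 12 +
    (now.getMonth() - rec.lastSaleDate.getMonth());
  return monthsSinceSale >= thresholdMonths;
}
```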
~120 messages

An admin panel was built with user management including page-level access permissions — each user can be granted access to specific dashboard pages. The biggest infrastructure project was migrating all persistence from JSON files to SQLite. Early in the project, everything was saved to JSON: user configs, CRM data, chat history, recurring provider data, hidden product lists — 9+ distinct files. A storage migration admin page verifies each file's existence and record count, compares records between backends, and migrates everything with one button. A toggle switch lets operators flip between storage backends without data loss. Security hardening included CSRF protection, moving hardcoded identifiers to environment variables, fixing a logout redirect bug in production, and ensuring the database file was always gitignored.
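The count comparison at the heart of the migration verifier might look like this minimal sketch. The interfaces are illustrative assumptions, not the project's real storage layer:

```typescript
// Compare record counts between the JSON backend and the SQLite backend for
// each logical store before allowing the one-button migration.
interface StorageBackend {
  count(store: string): number;
}

type VerifyResult = {
  store: string;
  jsonCount: number;
  sqliteCount: number;
  ok: boolean;
};

function verifyStores(
  stores: string[],
  json: StorageBackend,
  sqlite: StorageBackend,
): VerifyResult[] {
  return stores.map((store) => {
    const jsonCount = json.count(store);
    const sqliteCount = sqlite.count(store);
    return { store, jsonCount, sqliteCount, ok: jsonCount === sqliteCount };
  });
}
```

Surfacing a per-store `ok` flag is what lets a migration page show a green/red checklist before committing, and what makes a backend toggle switch safe to flip.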
~80 messages

The AI chat feature was enhanced with a flexible query function that dynamically accesses all data sources across the platform, enabling cross-query analysis. The chat needed access to every query the dashboard could run, not just a subset — this was enforced after discovering coverage gaps. A data export system was built for all 29+ queries with dry run mode showing record counts before full export. The export UI was improved to show actual response data instead of generic success status, after discovering queries returning zero records were marked successful. Python scripts anonymized exported data for a portfolio-ready demo — preserving numbers and dates while replacing identifying information. A separate demo project was set up with anonymized data. Machine learning and regression analysis were explored but rejected as the data was too noisy for reliable predictions.
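The anonymization approach (the guide mentions Python scripts; this TypeScript sketch shows the same idea in the project's main language) keeps numeric and date fields intact while replacing identifying strings with stable pseudonyms, so aggregates and trends survive in the demo. Which fields count as identifying is an assumption the caller supplies:

```typescript
// Replace identifying string fields with stable pseudonyms; the same original
// value always maps to the same pseudonym, and all other fields (numbers,
// dates) pass through untouched.
function anonymize(
  rows: Record<string, unknown>[],
  identifyingFields: string[],
): Record<string, unknown>[] {
  const pseudonyms = new Map<string, string>();
  let counter = 0;
  return rows.map((row) => {
    const out: Record<string, unknown> = { ...row };
    for (const field of identifyingFields) {
      const value = out[field];
      if (typeof value === "string") {
        let alias = pseudonyms.get(value);
        if (!alias) {
          alias = `${field}-${++counter}`;
          pseudonyms.set(value, alias);
        }
        out[field] = alias;
      }
    }
    return out;
  });
}
```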
~95 messages







