Menglong Guo 1, Yu Sun 2, 1 The Chinese University of Hong Kong, Ma Liu Shui, Hong Kong, 2 California State Polytechnic University, Pomona, CA 91768
This study presents a systematic comparison of LLM-powered database interfaces versus traditional SQL systems for inventory management, implementing two parallel Flask backends—a SQLite-based system using the SQLAlchemy ORM and an LLM-based system using DeepSeek to process natural language commands against JSON storage—with identical REST API endpoints enabling controlled comparison [10]. Experimental results reveal significant trade-offs: the SQL backend achieved 12 ms mean latency and 100% operational accuracy, while the LLM backend averaged 1,850 ms latency (154x slower) with 88% accuracy that degraded to 72% for complex multi-step operations. These findings demonstrate that while LLM-powered databases offer unprecedented query flexibility and natural language accessibility, they currently incur substantial performance and reliability penalties; traditional SQL systems remain superior for mission-critical applications requiring deterministic behavior and ACID compliance, while LLM approaches suit scenarios prioritizing user accessibility and dynamic query capabilities over guaranteed correctness and response speed.
Large Language Models, SQL Databases, Natural Language Interfaces, Inventory Management.
Copyright © CCNET 2026