Miami-based startup Subquadratic claims to have built the first large language model with fully subquadratic scaling, meaning compute requirements grow roughly linearly with context length rather than quadratically, as they do with standard transformer attention. If independently validated, the breakthrough could significantly reduce the cost of processing vast amounts of information with AI.
Its first model, SubQ 1M-Preview, reportedly reduces attention compute by a factor of nearly 1,000 at 12 million tokens of context compared with conventional transformer architectures. The company also introduced SubQ Code, a coding product; SubQ Search, a search tool; and an API, currently available through a private beta.
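For a sense of scale, a back-of-the-envelope comparison shows why full attention becomes prohibitive at these lengths. The fixed per-token budget of 12,000 comparisons below is a hypothetical figure chosen only so the ratio lands near the quoted 1,000x; it is not a disclosed parameter of SubQ 1M-Preview.

```python
# Illustrative only: not Subquadratic's benchmarks, just why quadratic attention
# becomes impractical as context grows.

def full_attention_pairs(n_tokens: int) -> int:
    """Pairwise comparisons under standard full attention, which is O(n^2)."""
    return n_tokens * n_tokens

def budgeted_pairs(n_tokens: int, budget: int = 12_000) -> int:
    """Comparisons if each token attends to a fixed budget of others, which is O(n).
    The 12,000-token budget is a hypothetical value, not a disclosed SubQ parameter."""
    return n_tokens * budget

for n in (128_000, 1_000_000, 12_000_000):
    full, capped = full_attention_pairs(n), budgeted_pairs(n)
    print(f"{n:>12,} tokens: full {full:.2e} pairs, budgeted {capped:.2e}, "
          f"ratio {full / capped:,.0f}x")
```

At 12 million tokens, the quadratic term works out to roughly 1.4e14 comparisons versus 1.4e11 under the fixed budget, a thousandfold gap of the same order as the company's claim.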
The launch quickly gained traction across the AI ecosystem. According to co-founder and CEO Justin Dangel, the announcement drew more than 12 million views on X and over 30,000 waitlist signups within 24 hours.
“The entire LLM world is built on transformers,” Dangel told a news source. “The architecture has a limitation. When you’re processing paragraphs and paragraphs of information, it becomes too expensive to handle large amounts of data at once, even at the frontier level.”
That constraint has heavily influenced how AI applications are built today. Instead of feeding entire datasets directly into models, developers often rely on retrieval systems, vector databases, prompt engineering, chunking, and orchestration layers to filter information before it reaches the model.
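As an illustration of that pattern, the sketch below shows the generic retrieve-then-prompt workaround many teams use today. The bag-of-words scoring is a toy stand-in for a real embedding model, and none of it reflects Subquadratic's products.

```python
# A minimal sketch of the retrieval workaround described above: rather than passing
# an entire corpus to the model, documents are chunked, scored against the query,
# and only the top-scoring chunks are placed in the model's context window.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def chunk(document: str, size: int = 50) -> list[str]:
    """Split a document into fixed-size word chunks."""
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Return the k chunks most similar to the query; only these reach the model."""
    chunks = [c for doc in documents for c in chunk(doc)]
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

The point is structural rather than algorithmic: only a handful of chunks ever reach the model, which is why context limits end up shaping the whole application rather than just the final API call.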
Subquadratic argues that much of this added complexity exists only because existing architectures cannot handle long-context processing efficiently.
The company’s approach, called Sparse Subquadratic Attention, computes attention only over the token comparisons that matter rather than over every possible pairing. In theory, that dramatically lowers computational overhead while preserving retrieval quality across extremely large contexts.
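Subquadratic has not published how its method selects those comparisons, so the following is only a generic sketch of the sparse-attention family the approach appears to belong to: each query token keeps its top-k highest-scoring keys and runs softmax attention over that subset instead of over all n tokens. The shapes and the top-k selection rule are assumptions, not the company's design.

```python
# Generic top-k sparse attention sketch; not Subquadratic's published algorithm.
import numpy as np

def sparse_topk_attention(q, k, v, top_k=64):
    """q, k, v: (n_tokens, d). Each query attends only to its top_k highest-scoring keys."""
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)                  # full (n, n) scores, kept here for clarity
    top_k = min(top_k, n)
    idx = np.argpartition(scores, -top_k, axis=1)[:, -top_k:]   # indices of top-k keys per query
    rows = np.arange(n)[:, None]
    kept = scores[rows, idx]                       # (n, top_k) selected scores
    weights = np.exp(kept - kept.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over the kept keys only
    return np.einsum("nk,nkd->nd", weights, v[idx])  # weighted sum of the selected values

# Example: 1,024 tokens with 64-dimensional heads; each query attends to just 64 keys.
rng = np.random.default_rng(0)
x = rng.standard_normal((1024, 64))
out = sparse_topk_attention(x, x, x, top_k=64)
```

Note that this demo still materializes the full score matrix, which is itself quadratic; a genuinely subquadratic system has to choose which keys to keep without scoring every pair, and that selection step is presumably where the company's unpublished method does its work.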
The company says the architecture allows it to expand context windows while operating at significantly lower cost. “It allows us to operate much less expensively,” Dangel said.
Still, the announcement has sparked intense debate in the AI research community.
While some researchers view the work as potentially groundbreaking, others remain skeptical about whether the claims will stand up to scrutiny.
Dangel said he understands the reaction. “Extraordinary claims will often be greeted rightly with skepticism,” he asserted. “The fact that our company has a potentially industry-disrupting innovation, I’m not surprised by the reaction.”
He added that the company plans to release additional technical papers and products in the coming months. “We look forward to releasing products and papers,” he said. “I hope the community will be satisfied.”