More than 100x Faster & Cheaper LLM-Powered SQL Queries with Proxy Models

May 13, 2026

Now, let’s get into the details of the particular proxy model used by our current version: logistic regression. To visualize what is happening, think of embeddings as unit vectors forming a (hyper)sphere. For a binary classification task, the proxy model essentially cuts the sphere into two halves. In our example, “aesthetic” and “thought-provoking plot” would fall on one side of the plane, whereas “underwhelming plot” and “boring movie” would be on the other side. Conceptually, the orientation of the plane determines which semantic notions are more relevant.
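To make the geometric picture concrete, here is a minimal sketch of that hyperplane cut. The 2-D unit vectors and labels are toy stand-ins for real embeddings, and the plain gradient-descent logistic regression is an illustrative assumption, not the production implementation:

```python
import numpy as np

# Toy 2-D "embeddings": unit vectors on a circle. Points in the upper
# half stand in for positive notions ("aesthetic", "thought-provoking
# plot"); the lower half for negative ones ("underwhelming plot",
# "boring movie"). Purely illustrative, not real embedding-model output.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, size=200)
X = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # points on the unit circle
y = (X[:, 1] > 0).astype(float)  # label = which side of the plane x2 = 0

# Logistic regression by plain gradient descent: learn a weight vector w
# and bias b; the hyperplane w.x + b = 0 is the cut through the sphere.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
    w -= X.T @ (p - y) / len(y)
    b -= np.mean(p - y)

pred = (X @ w + b > 0).astype(float)
print("train accuracy:", np.mean(pred == y))  # separable data, so near 1.0
```

The learned weight vector ends up roughly perpendicular to the cut, which is the code-level counterpart of “the orientation of the plane determines which semantic notions are more relevant.”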

Importantly, the proxy model is tuned to your data and your question: training the proxy uses a high-quality LLM to label a sample of your data for that particular question.

Revisiting when Proxy Models Work

We can now see more clearly what distinguishes the cases where proxy models work from the cases where they don’t: proxy models work well for prompts that can be decided by detecting semantic notions in the embedding space. They will fail for complex prompts that require forms of reasoning going beyond detecting patterns in the embedding space.

The good news is that, in practice, we have observed that proxy models work for a large class of AI+SQL queries. The SIGMOD26 paper provides a comprehensive evaluation, showing that proxies worked in 11 benchmarks: in 10 of them the ratio of proxy F1 to LLM F1 ranged from 90% to 102%, and in the 11th (Amazon Reviews) it was 116%. Notice that the proxy may even deliver better accuracy, because it has the benefit of being trained on multiple samples, whereas the LLM treats each row as a fresh problem.
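The 116% figure is simply the proxy-to-LLM F1 ratio; a quick check with the Amazon Reviews F1 values reported in the experimental results:

```python
# Relative accuracy = proxy F1 / LLM F1, using the Amazon Reviews
# numbers from the results table.
proxy_f1, llm_f1 = 0.860, 0.739
relative = proxy_f1 / llm_f1
print(round(relative, 2))  # → 1.16, i.e. roughly the 116% cited
```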

There is a second current limitation: extreme selectivities. Recall that Step 1 collects samples: it needs many examples of TRUE and many examples of FALSE. Several sophisticated techniques are employed to achieve this, even when the TRUEs far outnumber the FALSEs or vice versa. However, no purely sampling-based technique can handle cases of extreme selectivity, i.e., cases with very few TRUEs or very few FALSEs. This is why proxies are not employed in such extreme-selectivity cases. Note, though, that this problem is fundamentally addressable by various techniques.
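A binomial tail makes the sampling limitation tangible. The sample sizes and selectivities below are assumed illustrative numbers, not figures from the paper:

```python
import math

def prob_at_least(n, p, k):
    """P(Binomial(n, p) >= k): chance a uniform sample of n rows with
    TRUE rate p contains at least k TRUE examples."""
    return 1.0 - sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k))

# Moderate selectivity: a 1,000-row sample almost surely yields 50 TRUEs.
print(prob_at_least(1000, 0.10, 50))    # close to 1
# Extreme selectivity: at p = 0.0001, collecting 50 TRUEs by uniform
# sampling is hopeless -- the expected count in the sample is 0.1.
print(prob_at_least(1000, 0.0001, 50))  # essentially 0
```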

Why isn’t Vector Search Enough?

Proxy models appear … suspiciously close to vector search. After all, they also take vector embeddings as input. Why not just use vector search? There are two reasons why vector search is not enough. The obvious one is that proxies are not rankers; they are classifiers: multiclass classifiers (AI.CLASSIFY) or binary classifiers (AI.IF). But even if you narrow it down to just AI.IF, an attempt to simulate AI.IF with vector search is both hard to set up and gives suboptimal results: while proxy models are tailored to your data and your prompts, vector search is based on generic distance functions (such as cosine).
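A toy comparison illustrates the gap. Everything here is a hypothetical setup (synthetic unit vectors, a hand-picked query vector, a hand-rolled logistic regression), chosen only to show that a fitted hyperplane can separate labels that a fixed cosine threshold against one query vector cannot:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm "embeddings"

# Ground-truth labels depend on a direction that no single hand-picked
# query vector captures well (stand-in for LLM labels on your data):
true_w = np.array([1.0, -1.0, 0.5, 0.0])
y = (X @ true_w > 0).astype(float)

# Vector-search style: threshold cosine similarity to one "query" vector.
query = np.array([1.0, 0.0, 0.0, 0.0])
vs_pred = (X @ query > 0).astype(float)

# Proxy style: fit a logistic-regression hyperplane to the labels.
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= X.T @ (p - y) / len(y)
    b -= np.mean(p - y)
proxy_pred = (X @ w + b > 0).astype(float)

print("vector-search accuracy:", np.mean(vs_pred == y))
print("proxy accuracy:", np.mean(proxy_pred == y))
```

The cosine threshold is in fact a special case of a linear classifier with frozen weights; the proxy's advantage is precisely that its weights are fit to your data and your prompt.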

Experimental Results

We present here a subset of characteristic benchmarks from the SIGMOD26 paper, comparing the accuracy of proxy models against running LLM inference on all rows. In terms of quality, the relative accuracy varies from 0.92 (lowest) to 1.16 (highest), which means that for some tasks proxy models perform slightly better than straight LLM inference.

| Dataset | Prompt | F1 (Proxy) | F1 (LLM) | Relative (Proxy/LLM) |
| --- | --- | --- | --- | --- |
| Amazon Reviews 10k | Review is {sentiment label} | 0.860 | 0.739 | 1.163 |
| Banking77 | Is intent {intent label}? Think step-by-step: {CoT instructions} | 0.700 | 0.707 | 0.990 |
| California Housing | Location in Latitude & Longitude belongs to Southern California | 0.953 | 0.953 | 1.000 |
| FEVER | Is the claim supported by the text? | 0.782 | 0.853 | 0.917 |

In terms of scalability and cost, the architectural differences between BigQuery and AlloyDB lead to slightly different results for each system. At a high level, proxy models move part of the computation from the specialized hardware used by LLM inference services to ordinary database workers. This results in a large reduction in both cost and query latency. In the online-training case employed by BigQuery, for a typical one-million-row query, proxy models consume about 400x fewer tokens, and latency goes down by 30x-100x. In AlloyDB’s case, the LLM costs of PREPARE, which are similar to BigQuery’s, can be amortized over arbitrarily many runs of the prepared statements that invoke proxy models.
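Back-of-the-envelope arithmetic for the one-million-row case: the per-row token count below is an assumed illustrative figure, while the 400x reduction is the ratio quoted above.

```python
# Hypothetical per-row token budget; only the 400x ratio comes from the text.
rows = 1_000_000
tokens_per_row_llm = 200              # assumed prompt + completion per row
llm_tokens = rows * tokens_per_row_llm
proxy_tokens = llm_tokens // 400      # proxy spends LLM tokens only on the training sample
print(llm_tokens, proxy_tokens)       # → 200000000 500000
```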

© 2024 Byte Block - Tech Insight: Tutorials, Reviews & Latest News. Made By Huwa.
