Within the cloud, products and offerings are constantly evolving. Knowing what those changes are and how they can benefit your business is important for optimizing your cloud landscape and, more importantly, your costs. Recently, Google Cloud made a preview available for new improvements to BI Engine. This article explains those improvements, shares some high-level test results, and covers considerations to keep in mind when evaluating tools that improve latency and response times in BI applications such as Looker, Power BI, and Data Studio.

What is BI Engine Reservation?
At a high level, BI Engine is a fast, in-memory analysis service that is closely integrated with BigQuery (BQ) and serves data requests to client applications. Its key strength is that SQL results can be served, either partially or fully, from the BI Engine in-memory reservation instead of re-running the query against the underlying BigQuery source files, while the query engine is still optimized to use live data. BI Engine intelligently determines which data can be served from the in-memory layer, and it detects when results diverge from the underlying table or view, triggering a refresh of the in-memory data. In effect, it combines a query optimizer with an in-memory cache, going to disk only for the data that is needed. This matters because in a dynamic data environment, the memory layer must be smart enough to ensure the data provided to client applications is fresh, not stale.

Additionally, serving data from memory with a BI Engine Reservation minimizes BigQuery resource usage, including slot usage, bytes scanned, and overall data shuffling, which ultimately can lead to lower costs and latency. BI Engine is not just a traditional cache: it is a service that processes results from memory first but can also combine in-memory data with on-disk data, which makes it notably unique in its ability to speed up performance.
What is in the Recent Preview?
The recent BI Engine preview highlights the following changes, with a particular focus on deeper integration with BI tools such as Looker and other BI platforms:
- Direct integration with the BQ API, which allows any client application to leverage the benefits of BI Engine through the BQ API.
- Vectorized runtime, a more efficient way to utilize modern CPUs, with improvements in compression, encodings, and batching.
- Metadata, including dictionary information about tables, views, schema definitions, and permissions. Leveraging more of this metadata makes BI Engine more efficient.
- Reservations, an additional way the reservation is managed at the project level.
- Distributed engine, which allows for distributed memory stores, enabling more efficient use of hardware and larger available memory reservations.
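Because the preview manages the reservation at the project level, capacity can also be scripted rather than set only in the console. Below is a minimal sketch using the BigQuery Reservation API client library; the project ID, location, and sizes are placeholders, and it assumes the google-cloud-bigquery-reservation package is installed and the caller has the required IAM permissions.

```python
# Sketch: setting a project-level BI Engine reservation size.
# All names and values here are illustrative placeholders.

def gib_to_bytes(gib: int) -> int:
    """BI Engine reservation sizes are specified in bytes."""
    return gib * 1024 ** 3

def set_bi_reservation(project_id: str, location: str, size_gib: int):
    """Update the singleton BI Engine reservation for a project/location."""
    from google.cloud import bigquery_reservation_v1  # lazy import

    client = bigquery_reservation_v1.ReservationServiceClient()
    name = f"projects/{project_id}/locations/{location}/biReservation"
    reservation = bigquery_reservation_v1.BiReservation(
        name=name,
        size=gib_to_bytes(size_gib),
    )
    # Only the size field is being changed, so mask the update to it.
    return client.update_bi_reservation(
        bi_reservation=reservation,
        update_mask={"paths": ["size"]},
    )
```

Setting the size to 0 releases the reservation, which is worth scripting for test environments so you are not billed for idle capacity.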
Validating Performance Improvements
For our high-level test, we ran a SELECT query with a single JOIN as a control, without the user cache and without a BI Engine Reservation. We ran the query via three interfaces: the Looker UI, the Looker SQL Runner UI, and the BQ UI. Then we re-ran the exact same query in each interface, but with a 1 GB BI Engine Reservation, being careful to log the third query execution because that is most likely when the cache would kick in. Our data included approximately 35 million rows in one table and about 3 million in the JOIN table. Next, we validated the usage of the BI Engine Reservation by running bq show on the query job ID to determine whether the feature was used. We were also careful to disable the BQ user cache, because each user can have a results cache that could influence the results. Lastly, we attempted to validate our results by running different queries with different reservation sizing, but overall the results were quite similar.
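To check whether a given job actually benefited from the reservation, the job statistics returned by bq show include a biEngineStatistics section. The following is a small sketch of the kind of check we ran; the sample payload is illustrative, not output from our actual test jobs.

```python
# Sketch: checking whether a query job was accelerated by BI Engine.
# Feed it the JSON from `bq show --format=prettyjson -j <job_id>`.
import json

def bi_engine_mode(job_json: str) -> str:
    """Return the BI Engine acceleration mode recorded on a query job."""
    job = json.loads(job_json)
    stats = job.get("statistics", {}).get("query", {})
    # "FULL" = served entirely from the reservation, "PARTIAL" = partly,
    # "DISABLED" = not accelerated; "NONE" here means no stats present.
    return stats.get("biEngineStatistics", {}).get("biEngineMode", "NONE")

# Illustrative sample payload, not real job output.
sample = json.dumps(
    {"statistics": {"query": {"biEngineStatistics": {"biEngineMode": "FULL"}}}}
)
print(bi_engine_mode(sample))  # FULL
```

When acceleration is not applied, the statistics also carry biEngineReasons explaining why, which is useful when sizing the reservation.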


What we found was that for our first query (Query 1), the BI Engine Reservation improved the query runtime by 61%, 68%, and 33% for the Looker UI, the Looker SQL Runner, and the BQ UI respectively. As a follow-up, we ran another test (Query 2), this time on a table with about 74 million rows and a larger reservation (10 GB). The results were similar, with runtime improvements of 69%, 73%, and 77%. Notably, this matched our expectations: performance improvements should scale, the reservation should be sized appropriately to get the best benefit, and BigQuery really shines on larger data sets rather than smaller ones.
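For reference, the figures above are simple percent reductions in runtime. A quick sketch of the calculation, using hypothetical runtimes rather than our measured ones:

```python
# Sketch: the runtime-improvement calculation behind the percentages.
# The runtimes below are hypothetical, for illustration only.

def improvement_pct(baseline_s: float, accelerated_s: float) -> float:
    """Percent reduction in runtime relative to the baseline run."""
    return round((baseline_s - accelerated_s) / baseline_s * 100, 1)

# Hypothetical example: a 9.0 s baseline that drops to 3.5 s.
print(improvement_pct(9.0, 3.5))  # 61.1
```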
In our case, a few observations stood out:
- A larger reservation size did not help the performance, but this is likely due to the smaller size of the data set we used. In use-cases with larger data sets, a larger reservation size will likely provide more linear improvements.
- We also did not analyze the actual explain plan in the Looker UI. Since the Looker cache was disabled at query runtime, we were confident that we had isolated the workload sufficiently and that further optimization within Looker was not needed.
- Lastly, with a larger, more complex dashboard, or in a data warehouse model (either snowflake or star schema), the BI Engine Reservation helps keep dimension tables in memory so that only fact data needs to be brought in, reducing shuffling and providing greater benefits.
Overall Recommendations
As always, how these new improvements to the BI Engine Reservation offering benefit your business in Google Cloud boils down to your use case, which may involve different BI platforms, data with different rates of change (dynamic vs. static), or large analytical queries. Additionally, our primary metric for testing was query runtime, but we also noted a reduction in slots used and bytes processed in the BQ job log. These translate into lower runtimes as well as fewer concurrent slots and fewer overall bytes scanned. Lastly, understanding which types of in-memory functionality (user cache, BI Engine Reservation, or Looker cache) are in use, and how BigQuery may utilize these in-memory structures, will help you optimize and improve both the insights you can get from your data and the cost at which you obtain them. So, in order to optimize for BI Engine:
- Understand your use case – How much data is returned by each query? Is there a need to keep results fresh? Are there different filters or queries that may change your assumptions? Do you need to meet a target SLA for dashboard performance?
- Understand the “churn” of your data – To efficiently leverage the Looker cache or the BI Engine Reservation, you need to understand the rate of change of the underlying data. This will help you determine which caching layers are most efficient and which “levers” can be used to improve both cost and performance.
- Consider the tooling – There may be additional in-memory improvements inside the BI tool you are using. With Looker, a cache is applied based on timeframe. With Tableau, are you using an extract or a live connection? Is there another in-memory structure in the architecture that can help serve requests?
- Consider how the data is organized – Is the underlying data in a view or a table? Is partitioning applied effectively? Are the query access patterns understood?
- Consider SLAs and governance – Understanding how your data platform performs with and without the BI in-memory reservation will help you fine-tune each layer, including any components outside of BQ. This is best done with additional dashboards and insights into how BQ is executing queries, and into whether changes are called for with regard to the BI Engine Reservation, table partitioning, use of views, etc.
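On the governance point, one practical approach is to pull the job statistics for comparable runs with and without the reservation and compare slot and scan usage. A sketch, assuming JSON from bq show --format=prettyjson -j; the field names follow the BigQuery Job resource, but the values below are illustrative:

```python
# Sketch: comparing slot and scan usage between two query jobs,
# e.g. one run with and one without a BI Engine reservation.
import json

def job_cost_stats(job_json: str) -> dict:
    """Extract slot-ms and bytes processed from a job's statistics."""
    stats = json.loads(job_json).get("statistics", {})
    # The REST API returns int64 fields as strings, hence the int() casts.
    return {
        "slot_ms": int(stats.get("totalSlotMs", 0)),
        "bytes_processed": int(stats.get("query", {}).get("totalBytesProcessed", 0)),
    }

# Illustrative payloads, not real job output.
without_bi = json.dumps({"statistics": {
    "totalSlotMs": "41000",
    "query": {"totalBytesProcessed": "2500000000"}}})
with_bi = json.dumps({"statistics": {
    "totalSlotMs": "6000",
    "query": {"totalBytesProcessed": "120000000"}}})

a, b = job_cost_stats(without_bi), job_cost_stats(with_bi)
print(a["slot_ms"] - b["slot_ms"])  # slot-ms saved: 35000
```

Tracking these two numbers per dashboard over time makes it easy to see whether the reservation size, partitioning, or view usage needs adjusting.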
Next Steps
Understanding your use-case and data platforms is important for optimizing your cloud implementation. We hope that these guidelines will help you to make improvements and leverage the latest Google Cloud product innovations. Contact us to connect with our experts and let us know how we can help.