# API Integration
When integrating your store with the QL platform, there are several design considerations to take into account, depending on your particular use-case and the capabilities of our platform.
# Design Considerations
The QL platform is designed to provide a continuous stream of price recommendations, based on the information you provide it with. As such, some assumptions were made when designing the internals of our platform.
# Data Ownership and Responsibility
Each client is different, and QL as a platform tries to accommodate the needs and requirements of every use-case as generically and flexibly as possible.
To that end, we assume that you, our client, know best about the current state of each of your products, and that it is your responsibility to tell QL about changes to the products you want QL to work with.
When it comes to the question of who should hold the most up-to-date information about your store and product catalog, QL makes the following distinction:
- Clients are responsible for any product-related data, including but not limited to:
  - Shelf prices
  - Cost and inventory
  - Product attributes
  - Enabled / Disabled state
- QL is responsible for any pricing-related data, such as:
  - Price recommendations
  - Pricing rules
  - Competitor prices
# Data Ingestion
The core of the QL platform is a pricing engine that continuously generates new price recommendations for the current set of products available to it. As such, it was designed to process a continuous stream of product updates, rather than serve as a product catalog.
This has several practical implications for the way clients should integrate with the QL platform.
# Reading Data
Iterating over large result sets via our API is limited to 100K items per API query. This applies to the following API endpoints:

- `/api/v2/recommendations/accepted`
- `/api/v2/recommendations/all`
- `/api/v2/products/enabled`
- `/api/v2/products/disabled`

With the above API endpoints, even if the total number of items available in our system is greater than 100K, you can only paginate through the first 100K results.
# Iterating Over Data
With any of the above endpoints, our API returns the first 50 results by default. If you want to access the entire result set, you should implement a pagination mechanism that steps through the available results and fetches them in small chunks.
You can tell our API how many items to return on each iteration using the `per_page` query-string parameter, and which page of results to fetch using the `page` query-string parameter.
Here is an example in Ruby of how to iterate through API results using pagination:
```ruby
require 'json'
require 'ql-api'

# Replace API_KEY and API_SECRET with your Quicklizard API credentials
client = QL::Api::Client.new(API_KEY, API_SECRET)

page = 1
per_page = 50
total_pages = 1

while page <= total_pages do
  url = "/api/v2/products/enabled?page=#{page}&per_page=#{per_page}"
  response = client.get(url)
  data = JSON.parse(response)

  # Calculate the total number of pages from the reported item count
  total_pages = (data['total'].to_f / per_page).ceil

  data['result'].each do |item|
    # ... do something with each result
  end

  page += 1
end
```
# Writing Data
Sending updates to QL on new and existing products works best in small batches. This allows QL to quickly process the current batch of products and provide up-to-date recommendations as soon as they become available. Sending large batches of updates, or many small batches at once, simply queues up the update process until our internal systems are ready to handle it.
To that end, we recommend that you send updates to QL as soon as they become available on your system.
For example, let's assume you have a catalog of 100K products that you want QL to price. At 10:05am, you changed the inventory level for 50 of these products in your ERP. Rather than wait and send QL an update on your entire catalog, you should send the updated inventory levels to QL as soon as they change in your ERP.
QL will generate new recommendations for these products as soon as it receives the update from you, and you can then ingest the new recommendations back into your ERP once you have accepted them.
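As a rough illustration, a small, event-driven update using the Ruby client from the pagination example above might look like the sketch below. Note that the `/api/v2/products/update` path, the payload shape, and the `client.post` helper are assumptions made for this sketch; consult your integration guide for the actual product-update endpoint and schema.

```ruby
require 'json'
require 'ql-api'

# Replace API_KEY and API_SECRET with your Quicklizard API credentials
client = QL::Api::Client.new(API_KEY, API_SECRET)

# A small batch of inventory updates, collected from your ERP as soon
# as the change happened, rather than queued for a nightly sync
updated_products = [
  { 'id' => 'sku-1001', 'inventory' => 12 },
  { 'id' => 'sku-1002', 'inventory' => 0 }
]

# NOTE: the endpoint path, the payload shape and the `client.post`
# helper are illustrative assumptions; check your integration guide
# for the actual product-update endpoint and schema
client.post('/api/v2/products/update', updated_products.to_json)
```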
# Summary
To summarize, QL, as a continuous pricing engine, was designed to ingest data in small batches, as soon as it becomes available, rather than in large daily or weekly batches.
When writing data to QL, you should use our API to send updates from your ERP as soon as they become available, rather than wait for a daily or weekly data-sync process.
When reading data from QL, you should account for the 100K-item limitation and read data using our REST API at short intervals.
# Daily, Weekly & Large Data Updates
In some cases, clients prefer to sync with QL once a day, or in a non-continuous manner, to better support their internal batch processes.
To do so, we recommend that you implement an intermediary caching or persistence layer between your system and QL - usually a small DB or a set of files.
Let's see how this works with an actual use-case.
# Daily recommendations update from QL into ERP
In this example, a client has a catalog of 150K products that are priced on a daily basis in QL. The client runs a daily batch job that reads accepted recommendations from QL and updates their internal ERP system.
Since QL's REST API only supports up to 100K items per API query, calling the `/api/v2/recommendations/accepted` API endpoint once a day might not return all available recommendations.
To address this limitation, the client sets up a sync process as follows:
- Read available recommendations from the `/api/v2/recommendations/accepted` endpoint every 15 minutes.
- Save the results of each `/api/v2/recommendations/accepted` API call to a DB (see the sketch after this list).
- Run a daily batch job that reads recommendations from the DB and syncs them into the ERP.
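Here is a minimal sketch of the read-and-save step in Ruby, assuming SQLite as the intermediary persistence layer (see "Choosing a DB" below). The `product_id` field name is an assumption for this sketch; adjust it to the actual response schema.

```ruby
require 'json'
require 'sqlite3'
require 'ql-api'

# Replace API_KEY and API_SECRET with your Quicklizard API credentials
client = QL::Api::Client.new(API_KEY, API_SECRET)

# A single local file - no database server to operate
db = SQLite3::Database.new('recommendations.db')
db.execute <<~SQL
  CREATE TABLE IF NOT EXISTS recommendations (
    product_id TEXT PRIMARY KEY,
    payload    TEXT NOT NULL
  )
SQL

page = 1
per_page = 50
total_pages = 1

# Paginate through whatever is currently available; run every 15 minutes
while page <= total_pages do
  url = "/api/v2/recommendations/accepted?page=#{page}&per_page=#{per_page}"
  data = JSON.parse(client.get(url))
  total_pages = (data['total'].to_f / per_page).ceil

  data['result'].each do |item|
    # INSERT OR REPLACE keeps the latest recommendation per product, so
    # re-reading the same items on the next run stays idempotent.
    # 'product_id' is an assumed field name - adjust to your schema.
    db.execute(
      'INSERT OR REPLACE INTO recommendations (product_id, payload) VALUES (?, ?)',
      [item['product_id'], item.to_json]
    )
  end

  page += 1
end
```

The daily batch job then reads rows from the `recommendations` table and pushes them into the ERP, fully decoupled from the 15-minute read cycle.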
# Choosing a DB
We realize that integrating a DB into your existing flow might complicate things.
If you still want to use a DB but would rather avoid the overhead of setting up an actual database server, we recommend looking into SQLite - an open-source, embedded, file-based relational database that doesn't require any external setup.
# Should I use a DB or a file?
In some cases, using a DB as an intermediary caching or persistence layer can complicate things. If that's the case, you can replace the DB layer with a file, to which you append recommendations as they become available, and then sync them into your ERP. If you opt for this kind of integration, please be mindful of duplicate entries and how you handle them.
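For example, a file-based variant might append each recommendation as one JSON line and deduplicate at sync time. The helper names and the `product_id` field below are illustrative assumptions:

```ruby
require 'json'

# Append each recommendation as a single JSON line (NDJSON); appending
# keeps writes cheap and the file easy to inspect
def append_recommendations(items, path: 'recommendations.ndjson')
  File.open(path, 'a') do |f|
    items.each { |item| f.puts(item.to_json) }
  end
end

# At sync time, keep only the latest entry per product so duplicates
# from overlapping reads collapse into a single recommendation.
# NOTE: 'product_id' is an assumed field name - adjust to your schema.
def read_deduplicated(path: 'recommendations.ndjson')
  latest = {}
  File.foreach(path) do |line|
    item = JSON.parse(line)
    latest[item['product_id']] = item # later lines win
  end
  latest.values
end
```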