# API Integration

When integrating your store with the QL platform, there are several design considerations to take into account, depending on your particular use-case and the capabilities of our platform.

# Design Considerations

The QL platform is designed to provide a continuous stream of price recommendations, based on the information you provide it. As such, some assumptions were made when designing the internals of our platform.

# Data Ownership and Responsibility

Each client is different, and the QL platform tries to accommodate the needs and requirements of every use-case as generically and flexibly as possible.

To that end, we assume that you, our client, know best about the current state of each of your products, and that it is your responsibility to tell QL about changes to the products you want QL to work with.

When it comes to the question of who should hold the most up-to-date information about your store and product catalog, QL makes the following distinction:

  • Clients are responsible for any product-related data, including but not limited to:
    • Shelf prices
    • Cost and inventory
    • Product attributes
    • Enabled / Disabled state
  • QL is responsible for any pricing-related data such as:
    • Price recommendations
    • Pricing rules
    • Competitor prices

# Data Ingestion

The core of the QL platform is a pricing engine that continuously generates new price recommendations for the current set of products available to it. As such, it was designed to process a continuous stream of product updates, rather than to serve as a product catalog.

This has several practical implications for the way clients should integrate with the QL platform.

# Reading Data

Iterating over large result sets via our API is limited to 100K items per API query. This limit applies to the following API endpoints:

  • /api/v3/recommendations/accepted
  • /api/v3/recommendations/all
  • /api/v3/products/enabled
  • /api/v3/products/disabled

With the above API endpoints, even if the total number of items available in our system is greater than 100K, you can only fetch the first 100K results.

# Iterating Over Data

With any of the above endpoints, our API returns the first 50 available results by default. If you want to access the entire result set, you should implement an iteration mechanism that loops through the available results and fetches items in small chunks. This can be done using either pagination or absolute timeframes.

The advantage of using absolute timeframes is that they provide higher accuracy in time-sensitive queries, such as accepted recommendations.

Pagination is usually suitable for smaller batches, or for queries that are naturally ordered by ID rather than by time.

# Paginating Over Data

For the /api/v3/products/enabled and /api/v3/products/disabled endpoints, we recommend you implement a pagination mechanism as shown in the following example.

require 'json'
require 'ql-api'

# replace API_KEY and API_SECRET with your Quicklizard API credentials
client = QL::Api::Client.new(API_KEY, API_SECRET)

page = 1
per_page = 50
total_pages = 1

while page <= total_pages
  url = "/api/v3/products/enabled?page=#{page}&per_page=#{per_page}"
  response = client.get(url)
  data = JSON.parse(response.body)
  # recalculate the total number of pages from the reported item count
  total_pages = (data['total'].to_f / per_page).ceil
  data['result'].each do |item|
    # ... do something with each result item
  end
  page += 1
end

When paginating over data, the per_page query-string parameter tells our API how many items to return on each iteration, and the page parameter tells it at what offset to start fetching items.

# Fetching Data in Chunks

Due to the limitations of the /api/v3/recommendations/all and /api/v3/recommendations/accepted endpoints when it comes to reading large data sets, our API provides an endpoint that supports reading recommendations as a log of discrete events: /api/v3/recommendations/log.

We strongly recommend that you use the /api/v3/recommendations/log endpoint to fetch price recommendations, as shown in the example below:

require 'json'
require 'time'
require 'ql-api'

# replace API_KEY and API_SECRET with your Quicklizard API credentials
client = QL::Api::Client.new(API_KEY, API_SECRET)

last_id = ""
# replace YOUR_CLIENT_KEY with the actual client_key you're making API calls for
client_key = "YOUR_CLIENT_KEY"
channel = "default" # replace with your channel name in omni-channel setups

# 5 minutes ago, formatted as an ISO 8601 timestamp; computed once, before the
# loop, so the query window stays fixed while we page through it
since = (Time.now.utc - 60 * 5).strftime('%Y-%m-%dT%H:%M:%SZ')

loop do
  params = "client_key=#{client_key}&channel=#{channel}&since=#{since}&last_id=#{last_id}"
  url = "/api/v3/recommendations/log?#{params}"
  puts "loading data from #{url}"
  response = client.get(url)
  data = JSON.parse(response.body)
  puts "total: #{data['total']} | current batch: #{data['result'].size}"
  data['result'].each do |item|
    # do something with each result item
  end
  # a batch smaller than the 200-item chunk size, or an empty cursor,
  # means we've reached the end of the log
  break if data['result'].length < 200 || data['last_id'] == ''
  last_id = data['last_id']
end

The above example fetches recommendations that were generated within the last 5 minutes, in chunks of 200 items per request, until no more items are available.

The since parameter determines the start date from which to load recommendations. We recommend that you fetch recommendations frequently and in small batches.

The last_id parameter serves as a result cursor for the next iteration. For example, if we are fetching recommendations from the last 15 minutes and there are 1500 results, we fetch the first 200 results (ordered by ascending timestamp), then set the last_id parameter to the ID of the last item, which tells the API to fetch results after that ID.

We strongly recommend that you fetch recommendations from the /api/v3/recommendations/log API frequently. Ideally, you should be making API calls every 5-10 minutes, fetching results for that period plus a small overlap. For example, if you fetch results every 5 minutes, you can add a 1-minute overlap and set the since parameter to 6 minutes ago.
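
As a rough sketch of such a schedule, the loop below polls every 5 minutes with a 6-minute window. The fetch_recommendations helper is hypothetical and stands in for the log-reading loop from the previous example, and the deduplication by an assumed id field guards against the overlap:

require 'set'

# hypothetical helper that wraps the /api/v3/recommendations/log loop from
# the previous example and returns the fetched items
def fetch_recommendations(since)
  # ... see the previous example ...
  []
end

seen_ids = Set.new # guards against duplicates introduced by the overlap

loop do
  # look back 6 minutes: a 5-minute polling interval plus 1 minute of overlap
  since = (Time.now.utc - 60 * 6).strftime('%Y-%m-%dT%H:%M:%SZ')
  fetch_recommendations(since).each do |item|
    next if seen_ids.include?(item['id']) # 'id' is an assumed field name
    seen_ids.add(item['id'])
    # process the recommendation
  end
  # in a long-running process you would also prune old IDs from seen_ids
  sleep 60 * 5
end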

# Writing Data

Sending updates to QL on new and existing products works best in small batches. This allows QL to quickly process the current batch of products and provide up-to-date recommendations as soon as they become available. Sending large batches of updates, or many small batches at once, will simply queue up the update process until our internal systems are ready to handle it.

To that end, we recommend that you send updates to QL as soon as they become available on your system.

For example, let's assume you have a catalog of 100K products that you want QL to price. At 10:05am, you changed the inventory level for 50 of these products in your ERP. Rather than waiting and sending QL an update on your entire catalog, you should send the updated inventory levels to QL as soon as they change in your ERP.

QL will generate new recommendations for your products as soon as it receives this update from you, and you can then ingest the new recommendations back into your ERP once you have accepted them.
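
As an illustration, a minimal sketch of such an event-driven update follows. The write endpoint, the post method, and the payload field names are assumptions made for this sketch; consult our API reference for the actual write API and schema:

require 'json'
require 'ql-api'

# replace API_KEY and API_SECRET with your Quicklizard API credentials
client = QL::Api::Client.new(API_KEY, API_SECRET)

# the 50 products whose inventory changed in your ERP at 10:05am
changed_products = [
  { 'sku' => 'SKU-001', 'inventory' => 12 },
  { 'sku' => 'SKU-002', 'inventory' => 0 }
  # ...
]

# send the small batch right away, rather than waiting for a full catalog
# sync; the endpoint path and payload shape below are illustrative only
client.post('/api/v3/products', { 'products' => changed_products }.to_json)
puts "sent #{changed_products.size} product updates"

Sending only the 50 changed products keeps the batch small, so QL can process it quickly and return fresh recommendations for just those products.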

# Summary

To summarize: QL, as a continuous pricing engine, was designed to ingest small batches of data as soon as they become available, rather than large daily or weekly batches.

When writing data to QL, you should use our API to send updates from your ERP as soon as they become available, rather than wait for a daily or weekly data-sync process.

When reading data from QL, you should account for the 100K-item limitation, and read data using our REST API at short intervals.

# Daily, Weekly & Large Data Updates

In some cases, clients prefer to sync with QL once a day, or in a non-continuous manner, to better support their internal batch processes.

To do so, we recommend that you implement an intermediary caching or persistence layer between your system and QL - usually a small DB or a set of files.

Let's see how this works with an actual use-case.

# Daily recommendations update from QL into ERP

In this example, a client has a catalog of 150K products that are priced on a daily basis in QL. The client runs a daily batch job that reads accepted recommendations from QL and updates their internal ERP system.

Since QL's REST API only supports up to 100K items per API query, calling the /api/v3/recommendations/accepted API endpoint once a day might not return all available recommendations.

To address this limitation, the client sets up a sync process as follows:

  1. Read available recommendations from the /api/v3/recommendations/accepted endpoint every 15 minutes.
  2. Save results from the /api/v3/recommendations/accepted API call to a DB.
  3. Run a daily batch job that reads recommendations from the DB, and syncs them into the ERP.

# Choosing a DB

We realize that integrating a DB into your already-existing flow might complicate things.

If you still want to use a DB, but avoid the overhead of setting up an actual server, we recommend you look into using SQLite - an open-source, embedded, file-based relational database that doesn't require any external setup.
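
As a rough sketch of steps 1 and 2 above, assuming the sqlite3 Ruby gem; the fetch_accepted_recommendations helper and the stored field names are illustrative assumptions:

require 'json'
require 'time'
require 'sqlite3'

db = SQLite3::Database.new('recommendations.db')
# a primary key on the recommendation ID makes repeated fetches idempotent
db.execute <<~SQL
  CREATE TABLE IF NOT EXISTS recommendations (
    id TEXT PRIMARY KEY,
    payload TEXT,
    fetched_at TEXT
  )
SQL

# hypothetical helper wrapping the /api/v3/recommendations/accepted call
def fetch_accepted_recommendations
  # ... call the API as shown in the earlier examples ...
  []
end

# steps 1 and 2: run this every 15 minutes, e.g. via cron
fetch_accepted_recommendations.each do |item|
  db.execute(
    'INSERT OR IGNORE INTO recommendations (id, payload, fetched_at) VALUES (?, ?, ?)',
    [item['id'], item.to_json, Time.now.utc.iso8601]
  )
end

The daily batch job (step 3) can then read everything from this table, sync it into the ERP, and delete or mark the rows it has processed.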

# Should I use a DB or a file?

In some cases, using a DB as an intermediary caching or persistence layer can complicate things. If that's the case, you can replace the DB layer with a file, to which you append recommendations as they become available, and then sync them into your ERP. If you opt for this kind of integration, please be mindful of duplicate entries and how you handle them.
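
For example, a minimal file-based sketch, appending one JSON object per line and deduplicating by an assumed id field at sync time:

require 'json'

LOG_FILE = 'recommendations.jsonl'

# append newly fetched recommendations as one JSON object per line
def append_recommendations(items)
  File.open(LOG_FILE, 'a') do |f|
    items.each { |item| f.puts(item.to_json) }
  end
end

# at sync time, deduplicate by recommendation ID, keeping the latest entry
def read_unique_recommendations
  unique = {}
  File.foreach(LOG_FILE) do |line|
    item = JSON.parse(line)
    unique[item['id']] = item # later lines overwrite earlier duplicates
  end
  unique.values
end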