Performance benchmarks

QV2QS batch conversion scales across multiple CPU cores. The benchmarks below measure end-to-end performance across a range of worker counts on a fixed dataset.

Test environment

The benchmark dataset contains 28 QlikView applications with 234 sheets and 5,377 objects. All PRJ folders were pre-generated before the benchmark runs. Each run deployed all 28 apps to Qlik Cloud with reload and automatic binary load enabled.

The test machine has 12 logical CPU cores. QV2QS uses a process pool for local conversion (true CPU parallelism) and a thread pool for cloud deployment (I/O-bound).
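The split between the two pools can be sketched as below. This is a minimal illustration, not the tool's actual code: `convert_app`, `deploy_app`, and `run_batch` are hypothetical names, with placeholders standing in for the real PRJ parsing and Qlik Cloud API calls.

```python
# Sketch: process pool for CPU-bound conversion, thread pool for
# I/O-bound cloud deployment (hypothetical function names).
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import os

def convert_app(prj_path: str) -> str:
    # Placeholder for CPU-bound work: parse PRJ XML, traverse the
    # object graph, emit QS JSON. Returns the converted app path.
    return prj_path.replace(".prj", ".json")

def deploy_app(json_path: str) -> str:
    # Placeholder for I/O-bound work: app creation, media upload,
    # build, reload, and binary load via the Qlik Cloud API.
    return f"deployed:{json_path}"

def run_batch(prj_paths, workers=None):
    workers = workers or max(1, (os.cpu_count() or 2) - 1)
    # Phase 1: local conversion, true CPU parallelism via processes.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        converted = list(pool.map(convert_app, prj_paths))
    # Phase 2: cloud deployment, concurrent network calls via threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(deploy_app, converted))
```

Processes avoid the GIL for the parse-heavy conversion phase, while threads are cheaper for the deployment phase, where workers spend most of their time waiting on the network.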

End-to-end results

Each column is a worker count. The rows break down the two sequential phases — local conversion and cloud deployment — that make up the end-to-end total.

                       1w       5w      10w    ▸ 11w    ▸ 15w      20w      25w      30w
                ────────────────────────────────────────────────────────────────────────

Conversion
  Elapsed          4m 28s   1m 30s   1m 22s   1m 15s   1m 06s   1m 11s   1m 11s   1m 13s
  CPU time         3m 25s   5m 12s   8m 01s   7m 13s   7m 41s   8m 58s  10m 09s  10m 18s

Cloud             19m 12s   4m 32s   3m 26s   3m 23s   3m 00s   3m 00s   3m 09s   3m 13s
                ────────────────────────────────────────────────────────────────────────
Total             23m 40s   6m 02s   4m 48s   4m 38s   4m 06s   4m 11s   4m 20s   4m 26s
Speedup              1.0x     3.9x     4.9x     5.1x     5.8x     5.7x     5.5x     5.3x

▸ 11w = Auto-detect default (cores − 1)
▸ 15w = Observed peak
  • Elapsed — Wall-clock time for local conversion: PRJ XML parsing, object graph traversal, and QS JSON generation.
  • CPU time — Cumulative CPU time consumed across all worker processes during conversion. CPU time generally grows with worker count because each additional process adds scheduling and memory overhead.
  • Cloud — Wall-clock time for cloud deployment: app creation, media upload, build, reload, and binary load.
  • Total — End-to-end wall-clock time (Conversion elapsed + Cloud).
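The Speedup row is simply the 1-worker Total divided by each column's Total. A small sketch using the table's own numbers (converted to seconds):

```python
# Sketch: derive the table's Speedup row from the Total column.
# Values are the benchmark's Total times, in seconds.
totals = {1: 23 * 60 + 40, 10: 4 * 60 + 48, 15: 4 * 60 + 6}

def speedup(workers: int) -> float:
    # Speedup relative to the single-worker baseline.
    return round(totals[1] / totals[workers], 1)

print(speedup(10))  # 4.9, matching the 10w column
print(speedup(15))  # 5.8, matching the 15w column
```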

Scaling behavior

  • Local conversion scales with worker count up to the number of CPU cores, with diminishing returns as the cores saturate. Going from 1 to 10 workers on a 12-core machine cuts conversion time by about 70% (4m 28s to 1m 22s).
  • Cloud deployment scales with thread count because the work is I/O-bound (network requests to the Qlik Cloud API). The cloud phase drops from about 19 minutes at 1 worker to under 3.5 minutes at 10 workers.
  • Beyond the CPU core count, additional workers add process scheduling overhead with no meaningful improvement. The 20-worker and 30-worker runs are slower than the 15-worker run.
  • Peak throughput occurs at approximately 15 workers on this 12-core machine (5.8x speedup). The auto-detect default of cores minus one (11 workers, 5.1x speedup) stays within about 12% of the peak while avoiding the oversubscription penalty seen at higher counts.
  • CPU time increases steadily with worker count. At 11 workers the conversion consumes 7m 13s of cumulative CPU time. At 30 workers the same work consumes 10m 18s — a 43% increase in CPU cost for no wall-clock improvement.
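The auto-detect default mentioned above (cores minus one) can be expressed in a couple of lines. `default_workers` is an illustrative name; the fallback of 2 covers the documented case where `os.cpu_count()` returns None.

```python
import os

def default_workers() -> int:
    # Auto-detect default: logical cores minus one, floor of 1.
    # os.cpu_count() can return None, hence the fallback.
    return max(1, (os.cpu_count() or 2) - 1)
```

On the 12-core benchmark machine this evaluates to 11, the `▸ 11w` column in the table.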
Information note

The cloud phase includes a reload using binary load for each app. Cloud times depend on app data volume, reload complexity, tenant load, and Qlik Cloud engine allocation. Larger applications or busier tenants will produce longer cloud times than shown here.

Cloud deployment concurrency

Qlik Cloud enforces a limit on the number of concurrent reloads per tenant. When batch deployment runs with a high worker count, reload requests that exceed the tenant limit queue server-side. The cloud phase speedup plateaus once the concurrent reload limit is reached, regardless of local CPU and thread resources.
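One way to avoid piling requests into the server-side queue is to cap in-flight reloads client-side with a semaphore. This is a hedged sketch, not QV2QS's implementation: `RELOAD_LIMIT` is an assumed value (tenant limits vary), and `reload_app` is a placeholder for the real API call.

```python
# Sketch: cap concurrent reload requests with a semaphore so
# workers beyond the tenant limit wait locally.
import threading
from concurrent.futures import ThreadPoolExecutor

RELOAD_LIMIT = 5  # assumed tenant limit, not an actual Qlik quota
reload_slots = threading.Semaphore(RELOAD_LIMIT)

def reload_app(app_id: str) -> str:
    with reload_slots:  # at most RELOAD_LIMIT reloads in flight
        # Placeholder for the actual reload API call.
        return f"reloaded:{app_id}"

def reload_all(app_ids, workers=20):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(reload_app, app_ids))
```

With this pattern, raising the worker count past the reload limit still helps the non-reload deployment steps, while the reload step holds steady at the tenant's ceiling.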

For details on tenant-level limits, see Qlik Cloud resource limits. For the full specification of Qlik Cloud capacity guardrails, see Qlik Cloud specifications and capacity guardrails.

Learn more

Visit the discussion forum at community.qlik.com
