
Use Qiskit Code Assistant in local mode

Learn how to install, configure, and use any of the Qiskit Code Assistant models on your local machine.

Notes
  • Qiskit Code Assistant is in preview release status and is subject to change.
  • If you have feedback or want to contact the developer team, use the Qiskit Slack Workspace channel or the related public GitHub repositories.

The easiest way to get started with Qiskit Code Assistant in local mode is to use the automated setup scripts for either the VS Code or JupyterLab extension. These scripts will automatically install Ollama to run the LLMs, download the recommended model, and configure the extension for you.

VS Code extension setup

Run the following command in your terminal:

bash <(curl -fsSL https://raw.githubusercontent.com/Qiskit/qiskit-code-assistant-vscode/main/setup_local.sh)

This script performs the following steps:

  • Install Ollama (if not already installed)
  • Download and configure the recommended Qiskit Code Assistant model
  • Set up the VS Code extension to work with your local deployment

JupyterLab extension setup

Run the following command in your terminal:

bash <(curl -fsSL https://raw.githubusercontent.com/Qiskit/qiskit-code-assistant-jupyterlab/main/setup_local.sh)

This script will:

  • Install Ollama (if not already installed)
  • Download and configure the recommended Qiskit Code Assistant model
  • Set up the JupyterLab extension to work with your local deployment

Available models

Current models

These are the latest recommended models for use with Qiskit Code Assistant:

  1. Qiskit/mistral-small-3.2-24b-qiskit - Released October 2025
  2. Qiskit/qwen2.5-coder-14b-qiskit - Released June 2025
  3. Qiskit/granite-3.3-8b-qiskit - Released June 2025
  4. Qiskit/granite-3.2-8b-qiskit - Released June 2025

GGUF format models are optimized for local use and require fewer computational resources:

  1. Qiskit/mistral-small-3.2-24b-qiskit-GGUF - Released October 2025
    Trained with Qiskit data up to version 2.1

  2. Qiskit/qwen2.5-coder-14b-qiskit-GGUF - Released June 2025
    Trained with Qiskit data up to version 2.0

  3. Qiskit/granite-3.3-8b-qiskit-GGUF - Released June 2025
    Trained with Qiskit data up to version 2.0

  4. Qiskit/granite-3.2-8b-qiskit-GGUF - Released June 2025
    Trained with Qiskit data up to version 2.0

The open-source Qiskit Code Assistant models are available in Safetensors (a file format designed specifically for storing machine learning model weights and tensors in a secure and efficient manner) or GGUF (a binary format designed for quickly loading and saving models) and can be downloaded from Hugging Face as explained below.

Qiskit versions used for training

Model                        | QiskitHumanEval-Hard | QiskitHumanEval | HumanEval | Release date  | Trained on Qiskit version
mistral-small-3.2-24b-qiskit | 20.53                | 40.39           | 77.49     | October 2025  | 2.1
qwen2.5-coder-14b-qiskit     | 25.16                | 49.01           | 91.46     | June 2025     | 2.0
granite-3.3-8b-qiskit        | 14.56                | 27.15           | 62.80     | June 2025     | 2.0
granite-3.2-8b-qiskit        | 9.93                 | 24.50           | 57.31     | June 2025     | 2.0
granite-8b-qiskit-rc-0.10    | 15.89                | 38.41           | 59.76     | February 2025 | 1.3
granite-8b-qiskit            | 17.88                | 44.37           | 53.66     | November 2024 | 1.2

Deprecated models

These models are no longer actively maintained but remain available:

  1. qiskit/granite-8b-qiskit-rc-0.10 - Released February 2025 (deprecated)
  2. qiskit/granite-8b-qiskit - Released November 2024 (deprecated)

Advanced setup

If you prefer to manually configure your local setup or need more control over the installation process, see the sections below.

Download from the Hugging Face website

Follow these steps to download any Qiskit Code Assistant-related model from the Hugging Face website:

  1. Navigate to the desired Qiskit model page on Hugging Face.
  2. Go to the Files and Versions tab and download the safetensors or GGUF model files.

Download using the Hugging Face CLI

To download any of the available Qiskit Code Assistant models using the Hugging Face CLI, follow these steps:

  1. Install the Hugging Face CLI

  2. Log in to your Hugging Face account

    huggingface-cli login
    
  3. Download the model you prefer from the previous list

    huggingface-cli download <HF REPO NAME> <MODEL PATH> --local-dir <LOCAL PATH>
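
If you prefer to script the download instead of using the CLI, the following minimal Python sketch uses the huggingface_hub library (installed alongside the CLI). The repository ID and local directory shown are illustrative placeholders; substitute the model you chose from the list above.

from huggingface_hub import snapshot_download
 
# Download every file in the chosen model repository into a local folder.
snapshot_download(
    repo_id="Qiskit/granite-3.3-8b-qiskit-GGUF",   # illustrative example; use the repository you prefer
    local_dir="./granite-3.3-8b-qiskit-GGUF",      # where to store the files locally
)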
    
Manually deploy the Qiskit Code Assistant models locally through Ollama

There are multiple ways to deploy and interact with a downloaded Qiskit Code Assistant model. This guide demonstrates using the Ollama application (either through its Hugging Face Hub integration or with a locally downloaded model), as well as the llama-cpp-python package and the llama.cpp library.

Using the Ollama application

The Ollama application provides a simple way to run LLMs locally. It is easy to use, with a CLI that makes the whole setup process, model management, and interaction fairly straightforward. It is ideal for quick experimentation and for users who want to handle fewer technical details.

Install Ollama

  1. Download the Ollama application

  2. Install the downloaded file

  3. Launch the installed Ollama application

    Info
    The application is running successfully when the Ollama icon appears in the desktop menu bar. You can also verify the service is running by going to http://localhost:11434/.
  4. Try Ollama in your terminal and start running models. For example:

    ollama run hf.co/Qiskit/Qwen2.5-Coder-14B-Qiskit-GGUF
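
As a programmatic alternative to opening http://localhost:11434/ in a browser, the following minimal Python sketch (using the third-party requests package and assuming the default port 11434) checks that the local Ollama service is reachable and lists the models it currently has available.

import requests
 
# Ollama exposes a local REST API; /api/tags returns the locally available models.
response = requests.get("http://localhost:11434/api/tags", timeout=5)
response.raise_for_status()
 
for model in response.json().get("models", []):
    print(model["name"])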
    

Set up Ollama using the Hugging Face Hub integration

The Ollama/Hugging Face Hub integration provides a way to interact with models hosted on the Hugging Face Hub without creating a new Modelfile or manually downloading the GGUF or safetensors files. The default template and parameter files are already included for the model on the Hugging Face Hub.

  1. Make sure the Ollama application is running.

  2. Go to the desired model page and copy the URL. For example, https://huggingface.co/Qiskit/Qwen2.5-Coder-14B-Qiskit-GGUF.

  3. From your terminal, run the command:

    ollama run hf.co/Qiskit/Qwen2.5-Coder-14B-Qiskit-GGUF
    

You can use the hf.co/Qiskit/Qwen2.5-Coder-14B-Qiskit-GGUF model or any of the other recommended GGUF models, such as hf.co/Qiskit/mistral-small-3.2-24b-qiskit-GGUF or hf.co/Qiskit/granite-3.3-8b-qiskit-GGUF.

Set up Ollama with a manually downloaded Qiskit Code Assistant GGUF model

If you have manually downloaded a GGUF model, such as https://huggingface.co/Qiskit/Qwen2.5-Coder-14B-Qiskit-GGUF, and want to experiment with different templates and parameters, follow these steps to load it into your local Ollama application.

  1. Create a Modelfile with the following content, and be sure to update <PATH-TO-GGUF-FILE> to the actual path of your downloaded model.

    FROM <PATH-TO-GGUF-FILE>
    TEMPLATE """{{ if .System }}
    System:
    {{ .System }}
    
    {{ end }}{{ if .Prompt }}Question:
    {{ .Prompt }}
    
    {{ end }}Answer:
    ```python{{ .Response }}
    """
    
    PARAMETER stop "Question:"
    PARAMETER stop "Answer:"
    PARAMETER stop "System:"
    PARAMETER stop "```"
    
    PARAMETER temperature 0
    PARAMETER top_k 1
    
  2. Run the following command to create a custom model instance based on the Modelfile.

    ollama create Qwen2.5-Coder-14B-Qiskit -f <PATH-TO-MODELFILE>
    
    Note
    This process may take some time while Ollama reads the model file, initializes the model instance, and configures it according to the specifications provided.

Run the manually downloaded Qiskit Code Assistant model in Ollama

After the Qwen2.5-Coder-14B-Qiskit model has been set up in Ollama, run the following command to launch the model and interact with it in the terminal (in chat mode).

ollama run Qwen2.5-Coder-14B-Qiskit

Some useful commands:

  • ollama list - List models on your computer
  • ollama rm Qwen2.5-Coder-14B-Qiskit - Delete the model
  • ollama show Qwen2.5-Coder-14B-Qiskit - Show model information
  • ollama stop Qwen2.5-Coder-14B-Qiskit - Stop a model that is currently running
  • ollama ps - List which models are currently loaded
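
Beyond the interactive terminal session, you can also send prompts to the model programmatically. The sketch below is a minimal, non-streaming example against Ollama's local REST API, assuming the default port 11434 and the model instance created above; the prompt text is only illustrative.

import requests
 
# Send a single, non-streaming generation request to the local Ollama service.
payload = {
    "model": "Qwen2.5-Coder-14B-Qiskit",                   # the model instance created with `ollama create`
    "prompt": "Generate a quantum circuit with 2 qubits",
    "stream": False,                                        # return the full response as one JSON object
}
response = requests.post("http://localhost:11434/api/generate", json=payload, timeout=300)
response.raise_for_status()
print(response.json()["response"])
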
Manually deploy the Qiskit Code Assistant models locally through the llama-cpp-python package

An alternative to the Ollama application is the llama-cpp-python package, which is a Python binding for llama.cpp. It gives you more control and flexibility to run the GGUF model locally, and is ideal for users who wish to integrate the local model in their workflows and Python applications.

  1. Install llama-cpp-python
  2. Interact with the model from within your application using llama_cpp. For example:
from llama_cpp import Llama
 
# Path to the GGUF model file you downloaded
model_path = "<PATH-TO-GGUF-FILE>"
 
model = Llama(
    model_path,
    seed=17,          # fixed seed for reproducible sampling
    n_ctx=10000,      # context window size in tokens
    n_gpu_layers=37,  # layers to offload to the GPU; set to 0 to run entirely on the CPU
)
 
prompt = "Generate a quantum circuit with 2 qubits"
raw_pred = model(prompt)["choices"][0]["text"]

You can also add text generation parameters to the model to customize the inference:

generation_kwargs = {
    "max_tokens": 512,  # maximum number of tokens to generate
    "echo": False,      # do not echo the prompt in the output
    "top_k": 1,         # greedy decoding: consider only the most likely token
}
 
raw_pred = model(prompt, **generation_kwargs)["choices"][0]["text"]
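
llama-cpp-python also exposes a chat-style interface. The following rough sketch reuses the model object from above and assumes the GGUF file ships a chat template; the system message is a placeholder for the system prompt recommended in the model's Hugging Face repository.

# Chat-style inference with an explicit system message (placeholder text).
chat_output = model.create_chat_completion(
    messages=[
        {"role": "system", "content": "<SYSTEM-PROMPT-FROM-THE-MODEL-CARD>"},
        {"role": "user", "content": "Generate a quantum circuit with 2 qubits"},
    ],
    max_tokens=512,
)
print(chat_output["choices"][0]["message"]["content"])
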
Manually deploy the Qiskit Code Assistant models locally through llama.cpp

Use the llama.cpp library

Another alternative is to use llama.cpp, an open-source library for running LLM inference with minimal setup on a wide range of hardware. It provides low-level control over model execution and is typically run from the command line, pointing to a local GGUF model file.

There are several ways to install llama.cpp on your machine; refer to the llama.cpp documentation for the available installation options.

Once installed, you can use llama.cpp to interact with GGUF models in conversation mode as follows:

# Use a local model file
llama-cli -m my_model.gguf -cnv
 
# Or download and run a model directly from Hugging Face
llama-cli -hf Qiskit/Qwen2.5-Coder-14B-Qiskit-GGUF -cnv

You can also launch an OpenAI-compatible API server for the model in the following way:

llama-server -hf Qiskit/Qwen2.5-Coder-14B-Qiskit-GGUF
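
Once the server is running, any OpenAI-compatible client can talk to it. The following minimal Python sketch uses the third-party requests package and assumes llama-server's default address, http://localhost:8080; the system message is a placeholder for the system prompt recommended in the model's Hugging Face repository.

import requests
 
# llama-server exposes an OpenAI-compatible chat completions endpoint.
payload = {
    "messages": [
        {"role": "system", "content": "<SYSTEM-PROMPT-FROM-THE-MODEL-CARD>"},
        {"role": "user", "content": "Generate a quantum circuit with 2 qubits"},
    ],
    "max_tokens": 512,
}
response = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=300)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])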

Advanced parameters

With the llama-cli program, you can control the model generation using command-line options. For example, you can provide an initial "system" prompt using the -p/--prompt flag. In conversation mode (-cnv), this initial prompt acts as the system message; otherwise, you can simply prepend any desired instruction to your prompt text. You can also adjust sampling parameters such as the temperature (--temp), top-k (--top-k), top-p (--top-p), repetition penalty (--repeat-penalty), and random seed (--seed). The following is an example invocation using these options:

llama-cli -hf Qiskit/Qwen2.5-Coder-14B-Qiskit-GGUF \
  -p "You are a friendly assistant." -cnv \
  --temp 0.7 \
  --top-k 50 \
  --top-p 0.95 \
  --repeat-penalty 1.1 \
  --seed 42

To ensure proper functionality of the Qiskit models, we recommend using the system prompt provided in the Hugging Face GGUF repositories for mistral-small-3.2-24b-qiskit-GGUF, Qwen2.5-Coder-14B-Qiskit-GGUF, granite-3.3-8b-qiskit-GGUF, and granite-3.2-8b-qiskit-GGUF.

Manually connect extensions to local deployment

Use the VS Code extension and JupyterLab extension for the Qiskit Code Assistant to prompt the locally deployed Qiskit Code Assistant model. Once you have the Ollama application set up with the model, you can configure the extensions to connect to the local service.

Connect with the Qiskit Code Assistant VS Code extension

With the Qiskit Code Assistant VS Code extension, you can interact with the model and perform code completion while writing your code. This can work well for users looking for assistance writing Qiskit code for their Python applications.

  1. Install the Qiskit Code Assistant VS Code extension.
  2. In VS Code, go to the User Settings and set the Qiskit Code Assistant: Url to the URL of your local Ollama deployment (for example, http://localhost:11434).
  3. Reload VS Code by going to View > Command Palette... and selecting Developer: Reload Window.

The Qiskit Code Assistant model configured in Ollama should appear in the status bar and is then ready to use.

Connect with the Qiskit Code Assistant JupyterLab extension

With the Qiskit Code Assistant JupyterLab extension, you can interact with the model and perform code completion directly in your Jupyter Notebook. Users who predominantly work with Jupyter Notebooks can take advantage of this extension to further enhance their experience writing Qiskit code.

  1. Install the Qiskit Code Assistant JupyterLab extension.
  2. In JupyterLab, go to the Settings Editor and set the Qiskit Code Assistant Service API to the URL of your local Ollama deployment (for example, http://localhost:11434).

The Qiskit Code Assistant model configured in Ollama should appear in the status bar and is then ready to use.
