Madjid Tehrani

Navigating Quantum Cybersecurity Analytics: A Daily Exploration by CyberSec-DMS

Updated: May 5

Day 3: Hyperparameter Tuning for QSVM, a Simple Example Using Weights & Biases


Welcome back to our bustling digital cityscape, where quantum innovation equips our super-powered police cars with Quantum Support Vector Machines (QSVM), the first algorithm we use to chase down the elusive botnet robbers, and where we peel back the layers to uncover the intricacies of these futuristic vehicles. Before evaluating the readiness of QSVM for the complex world of MLSecOps, let's vary a few parameters, such as reps, the optimizer, and the feature map, to see whether QSVM's results improve.


We use Weights & Biases (W&B) to monitor our model with the code below. We must use our own custom-built SVM and quantum kernel to control the optimizers, because the standard SVM API does not expose the optimizer as a tunable hyperparameter. We also need extensive exception handling around circuit calls on real devices, which is another reason to own the quantum kernel. Examining optimizers is essential because every experiment with a random 120-point circle dataset results in 28,800 quantum circuit runs: the 120 × 120 kernel matrix is evaluated once for fitting and once for prediction, and 2 × 120 × 120 = 28,800. This is a heavy load for today's NISQ devices, which are not yet error-free or stable enough to handle such volumes reliably, so searching for optimizers that consume fewer circuits is also important. Let's configure W&B first (don't forget to create a user account at https://wandb.ai/site).


!pip install wandb -qU
import wandb
wandb.login(relogin=True)

After setting up W&B, we can use the code below for our empirical hyperparameter tuning. This is not a comprehensive hyperparameter tuning; it’s just an example to demonstrate that Quantum Machine Learning requires both hyperparameter tuning and model tracking.

import wandb
from qiskit import transpile, BasicAer, QuantumCircuit
from qiskit.circuit.library import ZZFeatureMap, PauliFeatureMap, ZFeatureMap
import numpy as np
from sklearn.metrics import accuracy_score
from scipy.optimize import minimize
import hashlib
import csv
import os

backend = BasicAer.get_backend("qasm_simulator")
shots = 1024
dimension = 2

# Dictionary of feature maps
feature_maps = {
    'PauliFeatureMap': PauliFeatureMap(dimension, reps=1),
    'ZZFeatureMap': ZZFeatureMap(dimension, reps=1),
    'ZFeatureMap': ZFeatureMap(dimension, reps=1)}

optimizers = ['COBYLA', 'SLSQP']
circuit_calls = 0 

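# Toy dataset: 120 random points in [-1, 1]^2, labeled +1 if they fall outside
# a circle of radius 0.6 and -1 if they fall inside it.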
def circle(num_points=120):
    points = 1 - 2 * np.random.random((num_points, 2))
    radius = 0.6
    labels = [1 if np.linalg.norm(point) > radius else -1 for point in points]
    return points, labels

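# One fidelity-kernel entry via the compute-uncompute trick: prepare |phi(x_i)>,
# apply the inverse of the x_j encoding, and measure. The frequency of the
# all-zeros bitstring estimates |<phi(x_j)|phi(x_i)>|^2.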
def evaluate_kernel(x_i, x_j):
    global circuit_calls
    circuit = QuantumCircuit(dimension)
    circuit.compose(feature_map.assign_parameters(x_i), inplace=True)
    circuit.compose(feature_map.assign_parameters(x_j).inverse(), inplace=True)
    circuit.measure_all()
    transpiled = transpile(circuit, backend)
    job = backend.run(transpiled, shots=shots)
    result = job.result()
    counts = result.get_counts(transpiled)
    circuit_calls += 1
    return counts.get("0" * dimension, 0) / shots

def custom_kernel_matrix(X1, X2):
    return np.array([[evaluate_kernel(x_i, x_j) for x_j in X2] for x_i in X1])

def compute_hash(data):
    data_str = str(data).encode('utf-8')
    return hashlib.sha256(data_str).hexdigest()

def lookup_kernel_matrix_in_csv(data_hash):
    csv_filename = '/kernel_matrix_cache.csv'
    if not os.path.exists(csv_filename):
        return None
    with open(csv_filename, 'r') as file:
        reader = csv.reader(file)
        for row in reader:
            if row[0] == data_hash:
                return np.array([list(map(float, matrix_row.split())) for matrix_row in row[1:]])
    return None

def save_kernel_matrix_to_csv(data_hash, kernel_matrix):
    csv_filename = '/kernel_matrix_cache.csv'
    if not os.path.exists(csv_filename):
        with open(csv_filename, 'w') as f:
            pass
    with open(csv_filename, 'a') as file:
        writer = csv.writer(file)
        flattened_matrix = [" ".join(map(str, row)) for row in kernel_matrix]
        writer.writerow([data_hash] + flattened_matrix)

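# Cache-only lookup: all matrices are precomputed below by
# compute_and_save_all_kernel_matrices(), so this is expected to hit the CSV
# cache; X2 is accepted only so the signature matches kernel_function.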
def optimized_custom_kernel_matrix(X1, X2, fmap_name, reps):
    data_hash = compute_hash((X1, fmap_name, reps))
    kernel_matrix = lookup_kernel_matrix_in_csv(data_hash)
    return kernel_matrix

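# Primal soft-margin SVM objective: ||w||^2 plus a large penalty C on hinge-loss
# violations. X is the precomputed quantum kernel matrix here, so w carries one
# weight per training point.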
def loss(params, X, y, C=10000000):
    w = params[:-1]
    b = params[-1]
    hinge_loss = np.maximum(0, 1 - y * (X.dot(w) + b))
    return np.dot(w, w) + C * np.sum(hinge_loss)

def custom_svm_fit(X, y, optimizer, kernel_function=custom_kernel_matrix):
    kernel_matrix = kernel_function(X, X)
    initial_params = np.random.rand(kernel_matrix.shape[1] + 1)
    result = minimize(fun=lambda params: loss(params, kernel_matrix, y),
                      x0=initial_params,
                      method=optimizer)
    wandb.log({"weights": result.x[:-1], "bias": result.x[-1]})
    return result.x[:-1], result.x[-1]

def custom_svm_predict(X, w, b, kernel_function=custom_kernel_matrix):
    kernel_matrix = kernel_function(X, X)
    return np.sign(kernel_matrix.dot(w) + b)

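# Precompute the 120x120 kernel matrix once per (feature map, reps) pair and
# cache it to CSV, so the sweep never re-runs the 14,400 fidelity circuits for
# a configuration it has already seen.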
def compute_and_save_all_kernel_matrices():
    for reps in range(1, 4):
        for name in feature_maps.keys():
            global feature_map
            feature_map = feature_maps[name]
            feature_map.reps = reps
            data_hash = compute_hash((points_circle, name, reps))
            if lookup_kernel_matrix_in_csv(data_hash) is None:
                kernel_matrix = custom_kernel_matrix(points_circle, points_circle)
                save_kernel_matrix_to_csv(data_hash, kernel_matrix)
                print(f"Saved kernel matrix for {name} with reps={reps}")

def optimized_train():
    for reps in range(1, 4):
        for name in feature_maps.keys():
            global feature_map
            feature_map = feature_maps[name]
            feature_map.reps = reps
            for optimizer in optimizers:
                global circuit_calls
                circuit_calls = 0
                run = wandb.init(project="quantum_svm03", group=name,
                                 name=f'{name}_reps_{reps}_opt_{optimizer}',
                                 config={"reps": reps, "feature_map": name, "optimizer": optimizer})
                w, b = custom_svm_fit(points_circle, labels_circle, optimizer, kernel_function=lambda X1, X2: optimized_custom_kernel_matrix(X1, X2, name, reps))
                predicted = custom_svm_predict(points_circle, w, b, kernel_function=lambda X1, X2: optimized_custom_kernel_matrix(X1, X2, name, reps))
                accuracy = accuracy_score(labels_circle, predicted)
                run.log({"accuracy": accuracy, "circuit_calls": circuit_calls})
                print(f'Accuracy for {name} with reps={reps} and optimizer={optimizer}: {accuracy * 100}%')
                print(f'Number of circuit calls for {name} with reps={reps} and optimizer={optimizer}: {circuit_calls}')
                run.finish()

points_circle, labels_circle = circle()

# Sweep configuration
sweep_config = {
    'method': 'grid',
    'metric': {
      'name': 'accuracy',
      'goal': 'maximize'
    },
    'parameters': {
        'reps': {
            'values': [1, 2, 3]
        },
        'optimizer': {
            'values': ['COBYLA', 'SLSQP']
        },
        'feature_map': {
            'values': ['PauliFeatureMap', 'ZZFeatureMap', 'ZFeatureMap']
        }
    }
}
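# Note: optimized_train() iterates over every (reps, feature map, optimizer)
# combination itself rather than reading wandb.config, so the grid sweep mainly
# serves as a W&B umbrella for grouping and comparing the runs.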

# Precompute all kernel matrices and save them
compute_and_save_all_kernel_matrices()

# Initialize the sweep
sweep_id = wandb.sweep(sweep_config, project="quantum_svm03")

# Run the sweep
wandb.agent(sweep_id, optimized_train)

This will be a lengthy run, around one hour on Colab Pro. Even though we compute all kernel matrices in advance wherever possible to avoid repeated costs, kernel evaluations remain both time-consuming and expensive in the realm of quantum computing. The results are shown below.



The approach above is effective when you want to dissect the quantum algorithm and push toward roughly 99.91% accuracy, which isn't feasible through the standard API.


Best configuration: reps=2, COBYLA or SLSQP, ZFeatureMap (for maximum accuracy).
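If you prefer to confirm the winning configuration programmatically instead of reading it off the dashboard, the public W&B API can pull the logged runs back. A minimal sketch, assuming the quantum_svm03 project above and that your default W&B entity is set:

import wandb

# Rank the project's runs by their logged accuracy.
api = wandb.Api()
runs = api.runs("quantum_svm03")  # you may need the "<entity>/quantum_svm03" form
best = max(runs, key=lambda r: r.summary.get("accuracy", 0))
print(best.name, best.summary.get("accuracy"), best.summary.get("circuit_calls"))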

However, we can also use the standard API, known as QSVC. In this scenario we cannot change the optimizer, but adjusting reps or the SVM hyperparameters is possible. Here is sample code for QSVC using our original dataset: https://ieee-dataport.org/open-access/botnet-dga-dataset#files. You will also need to install Qiskit Machine Learning with: !pip install qiskit-machine-learning.


# Get the data and prepare it
!wget https://aq5efd7d2644dd406cb3ec2d.blob.core.windows.net/dga/BotnetDgaDataset_1000.csv
import csv
import os
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

datafilename="BotnetDgaDataset_1000.csv"
cwd=os.getcwd()
mycsv=cwd+"/"+datafilename
print(mycsv)
def load_data(filepath):
    with open(filepath) as csv_file:
        data_file = csv.reader(csv_file)
        temp = next(data_file)
        n_samples = 1000
        n_features = 7
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=int)

        for i, ir in enumerate(data_file):
            data[i] = np.asarray(ir[:-1], dtype=np.float64)
            target[i] = np.asarray(ir[-1], dtype=int)
    return data, target

features, labels = load_data(mycsv)
features = MinMaxScaler(feature_range=(0, np.pi)).fit_transform(features)
train_features, test_features, train_labels, test_labels = train_test_split(
    features, labels, train_size=700, shuffle=False
)
# Prepare the quantum kernel
from qiskit import BasicAer
from qiskit.circuit.library import ZFeatureMap
from qiskit.utils import algorithm_globals
from qiskit_machine_learning.kernels import FidelityQuantumKernel

algorithm_globals.random_seed = 12345

feature_map = ZFeatureMap(feature_dimension=7, reps=1)

qkernel = FidelityQuantumKernel(feature_map=feature_map)
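Before committing to a full fit, it can be worth a quick sanity check that the kernel evaluates sensibly on a couple of samples. A minimal sketch, using the qkernel defined above:

# The Gram matrix of the first two training samples should be symmetric with
# (approximately) ones on the diagonal.
small_gram = qkernel.evaluate(x_vec=train_features[:2])
print(small_gram)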

When you read the code below, it becomes evident that the QSVC API offers only a few opportunities for hyperparameter tuning.


from qiskit_machine_learning.algorithms import QSVC
import time

# QSVC exposes the quantum kernel and the usual SVC hyperparameters, but not
# the classical optimizer.
qsvc = QSVC(quantum_kernel=qkernel)

QSVC_start = time.perf_counter()
# training
qsvc.fit(train_features, train_labels)
# testing
qsvc_score = qsvc.score(test_features, test_labels)
QSVC_end = time.perf_counter()
print(f"QSVC Accuracy: {qsvc_score}")
print(f"time for AER simulator=, {QSVC_end-QSVC_start}") 

The result is an accuracy of 86.34%, obtained in 3689.7 seconds on the Aer simulator.



On the NISQ devices available through AWS Braket, this may take up to three weeks to complete, if the API run succeeds at all. There is a need to harden the architecture when using expensive quantum resources:
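As a sketch of what such hardening could look like, here is an illustrative wrapper of our own design (not part of Qiskit or Braket; all names are hypothetical) that adds a circuit-call budget, retries with backoff, and a clear failure path for persisting partial results:

import time

MAX_RETRIES = 3
CALL_BUDGET = 30_000  # hard cap before burning through paid device time
calls_made = 0

def hardened_run(backend, circuit, shots=1024):
    """Run one circuit with a call budget, retries, and exponential backoff."""
    global calls_made
    if calls_made >= CALL_BUDGET:
        raise RuntimeError("Circuit-call budget exhausted; stop and resume later.")
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            job = backend.run(circuit, shots=shots)
            counts = job.result().get_counts()  # may raise on device or network faults
            calls_made += 1
            return counts
        except Exception as exc:  # connectivity, expired sessions, queue errors
            print(f"Attempt {attempt}/{MAX_RETRIES} failed: {exc}")
            time.sleep(2 ** attempt)  # back off before retrying
    raise RuntimeError("All retries failed; persist partial results before exiting.")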



The primary issue with this API is that it offers no control over circuit calls, authentication sessions, saving results during downtime, exception handling, or internet connectivity problems. With today's NISQ devices, many things can go wrong and compromise the results, especially given our reliance on a large number of circuit calls. This paves the way for a superior variant of QSVC, Pegasos, which we will discuss in our next note.


Stay with us to see how PegasosQSVC works, and remember to follow us on LinkedIn and Twitter, where we will show how hybrid quantum machine learning will change the realm of cyber defense.


