Remote Computing

In this tutorial, we are going to send computations to remote platforms using Perceval. A “platform” is an online simulator or an actual physical photonic QPU available through a Cloud provider.

The default provider is the Quandela Cloud, but note that others exist; see providers for additional information.

I. Updated Hello World

Remember the Hello World code from the Getting Started page? Let’s update it in order to run the same simulation on a remote platform.

[1]:
import time
import math
from pprint import pprint
from tqdm.notebook import tqdm

from perceval import BS, PS, BasicState, RemoteConfig, RemoteProcessor, pdisplay, NoiseModel
from perceval.algorithm import Sampler
[2]:
# The first few lines remain unchanged
input_state = BasicState("|1,1>")
circuit = BS()
noise_model = NoiseModel(transmittance=0.05, indistinguishability=0.85)

# In this version, the local Processor is replaced by a RemoteProcessor which is constructed slightly differently:
# - The back-end name "SLOS" is replaced by the name of the platform we want to send our computation to (here, "sim:slos").
# - The 2nd parameter is an authentication token to the Cloud provider we're using (by default, the Quandela Cloud).
#    Let's assume we have a valid token, for now. We'll see how to generate one, later on, in this tutorial.
# - Noise parameters can still be set
processor = RemoteProcessor("sim:slos", "a valid authentication token", noise=noise_model)
processor.add(0, circuit)  # Add the circuit to the remote processor

# Starting from here, Processor and RemoteProcessor are fully interchangeable
processor.min_detected_photons_filter(1)
processor.with_input(input_state)

sampler = Sampler(processor, max_shots_per_call=10_000)  # Here, the max_shots_per_call parameter is required to limit your credit usage on the Cloud
samples = sampler.sample_count(10_000)['results']
probs = sampler.probs()['results']
print(f"Samples: {samples}")
print(f"Probabilities: {probs}")
Samples: {
  |0,2>: 135
  |1,0>: 4841
  |1,1>: 22
  |2,0>: 127
  |0,1>: 4875
}
Probabilities: {
        |1,1>: 0.001923
        |1,0>: 0.487179
        |0,1>: 0.487179
        |2,0>: 0.011859
        |0,2>: 0.011859
}

Similar enough, isn’t it?

II. Preparing for remote computing

Connecting to a remote provider requires a bit of configuration. You need to be able to connect and authenticate to the remote service.

Quandela Cloud authentication is managed through user tokens. These are created on the Cloud website, from your user account page. If you haven’t done so already, please visit cloud.quandela.com and create an account. You’ll be able to check which platforms are available to you, as well as their specifications. You have to generate a token in order to use any Cloud platform from a Perceval script. A token is personal and should not be shared. You can create as many as you wish on a single account.

You can set up your connection information once and for all on a given computer, using Perceval’s persistent data system:

[3]:
# Save your token and proxy configuration in a RemoteConfig instance, then call save().
# You only need to do this once per machine; the data is shared among different Perceval installs on the same machine.
# If your token changes, you'll need to redo this step once.
remote_config = RemoteConfig()
remote_config.set_token("MY_TOKEN")
remote_config.set_proxies({"https": "socks5h://USER:PASSWORD@HOST:PORT"})  # Optional proxy configuration
remote_config.save()

If you need to use several different credentials within the same script, you will have to pass the token value explicitly to each RemoteProcessor, as shown in the updated Hello World.
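For instance, a single script could use two different accounts by passing a token explicitly to each RemoteProcessor. Here is a minimal sketch, assuming both tokens are valid (the token strings below are placeholders):

# Placeholder tokens for two different Cloud accounts
processor_a = RemoteProcessor("sim:slos", "TOKEN_OF_ACCOUNT_A")
processor_b = RemoteProcessor("sim:slos", "TOKEN_OF_ACCOUNT_B")
# Each processor authenticates its requests with the token it was given,
# independently of any token saved through RemoteConfig.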

III. Connect to a platform

Once you have chosen the platform you want your code executed on, all you have to do is copy its name and define a RemoteProcessor with it. Don’t forget to make sure your token has access rights to that platform. Note that all simulator platform names start with “sim:” whereas actual QPU names start with “qpu:”.

[4]:
# A RemoteProcessor works mostly like a regular Processor except that it performs https requests instead of local simulations
remote_simulator = RemoteProcessor("sim:belenos")  # For instance, connect to sim:belenos

You can now access the specifications of the platform directly in Perceval.

[5]:
specs = remote_simulator.specs
pdisplay(specs["specific_circuit"])
[5]:
(Output: rendering of the platform’s specific circuit)
[6]:
print("Platform constraints:")
pprint(specs["constraints"])
print("\nPlatform supported parameters:")
pprint(specs["parameters"])
Platform constraints:
{'max_mode_count': 24,
 'max_photon_count': 12,
 'min_mode_count': 1,
 'min_photon_count': 1,
 'support_multi_photon': False}

Platform supported parameters:
{'final_mode_number': 'number of modes of the output states. States having a '
                      'photon on unused modes will be ignored. Useful when '
                      'using computed circuits (default input_state.m)',
 'min_detected_photons': 'minimum number of detected photons to keep a state '
                         '(default input_state.n)',
 'use_photon_recycling': 'boolean enabling the Photon Recycling noise '
                         'mitigation process. (default enabled whenever it can '
                         'be used)'}

IV. Run a remote simulation

Now, we can set up our computation. To give a value to a “platform supported parameter”, you have to use the dedicated set_parameter function (or set_parameters to set several at once).
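For example, a parameter from the list printed above could be set like this (a short sketch; the parameter names come from the platform specs and their availability depends on the platform):

# Set a single platform supported parameter
remote_simulator.set_parameter("use_photon_recycling", True)
# Or set several at once
remote_simulator.set_parameters({"min_detected_photons": 1, "use_photon_recycling": True})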

[7]:
input_state = BasicState([1, 1])
c = BS() // PS(phi=math.pi/4) // BS()

remote_simulator.set_circuit(c)
remote_simulator.min_detected_photons_filter(1)  # Output state filtering on the basis of detected photons
remote_simulator.with_input(input_state)

remote_simulator.noise = NoiseModel(indistinguishability=.95, transmittance=.1, g2=.01)  # You can inject noise after the object creation

We can now use the Sampler with our RemoteProcessor. When creating a Sampler for a remote platform, you have to set a maximum shots threshold (the max_shots_per_call named parameter); local simulations do not require it. A shot is any detected event containing at least one photon: it is easy to explain and easy to measure. This shot threshold prevents you from consuming too many QPU resources: once it is reached, the acquisition stops. Each job generated by a Sampler call may use up to this threshold (e.g. calling sample_count three times can consume at most 3*max_shots_per_call shots).

[8]:
nsamples = 200000
sampler = Sampler(remote_simulator, max_shots_per_call=nsamples)  # You have to set a 'max_shots_per_call' named parameter
# Here, with `min_detected_photons_filter` set to 1, all shots are de facto samples of interest.
# Thus, in this particular case, the expected sample number can be used as the shots threshold.

sampler.default_job_name = "My sampling job"  # All jobs created by this sampler instance will have this custom name on the cloud

# Create a job and run it asynchronously (with the execute_async method). Calling execute_sync, or __call__, would run the job synchronously
# and would thus block the script execution until the job ends and results are returned.
remote_job = sampler.sample_count.execute_async(nsamples)
print(remote_job.id)  # Once created, the job was assigned a unique id
dcb21813-8648-49e8-841c-e21889e94af1

The request has now been sent to a remote platform through the cloud. As it is asynchronous, other computations can be performed locally before the results are retrieved. In this example, let’s just wait for the end of the computation. If you go to the Quandela Cloud website again, you can now see the job and its progress.

[9]:
previous_prog = 0
with tqdm(total=1, bar_format='{desc}{percentage:3.0f}%|{bar}|') as tq:
    tq.set_description(f'Get {nsamples} samples from {remote_simulator.name}')
    while not remote_job.is_complete:
        tq.update(remote_job.status.progress/100-previous_prog)
        previous_prog = remote_job.status.progress/100
        time.sleep(1)
    tq.update(1-previous_prog)
    tq.close()

print(f"Job status = {remote_job.status()}")
Job status = SUCCESS

Once the previous cell has finished running, the job is complete (again, you can check its status on the website). Let’s retrieve the results and do some computation with them. In this case, the computation is expected to be fast (unless the simulator is unavailable or many jobs are queued), so we can reuse the remote_job object we created previously. If the computation took a long time, we could have shut down our computer, turned it back on later and created a new job object to retrieve the results directly. The job id, which is visible on the website, is required to resume a job and load its results.

[10]:
''' # To retrieve your job using a job id
remote_processor = RemoteProcessor("sim:belenos")
remote_job = remote_processor.resume_job("<job id copied from the website>")
'''

results = remote_job.get_results()
pdisplay(results['results'])
state count
|1,0> 95051
|0,1> 94441
|1,1> 5465
|0,2> 2551
|2,0> 2484
|2,1> 5
|1,2> 3

V. Use a real photonic QPU

You can run the same sampling on the corresponding QPU. In order to manage your QPU credits, you can estimate the number of shots you’d need for a particular data acquisition. Please note that the maximum shot count and the maximum sample count act as a dual threshold system: as soon as either threshold is exceeded, the acquisition stops and the results are returned.

[11]:
qpu_platform_name = "qpu:belenos"
nsamples = 200000

remote_qpu = RemoteProcessor(qpu_platform_name)
remote_qpu.set_circuit(c)
remote_qpu.with_input(input_state)

print("With this setup:")
remote_qpu.min_detected_photons_filter(2)
required_shots = remote_qpu.estimate_required_shots(nsamples=nsamples)
print(f"To gather {nsamples} 2-photon coincidences on {qpu_platform_name}, you would need around {required_shots} shots.")

remote_qpu.min_detected_photons_filter(1)
required_shots = remote_qpu.estimate_required_shots(nsamples=nsamples)
print(f"To gather {nsamples} photon events (with at least 1 photon) on {qpu_platform_name}, you would need exactly {required_shots} shots.")
With this setup:
To gather 200000 2-photon coincidences on qpu:belenos, you would need around 12933333 shots.
To gather 200000 photon events (with at least 1 photon) on qpu:belenos, you would need exactly 200000 shots.
[12]:
sampler_on_qpu = Sampler(remote_qpu, max_shots_per_call=nsamples)

remote_job = sampler_on_qpu.sample_count
remote_job.name = "QPU sampling"  # You may also give a name to individual jobs
remote_job.execute_async(nsamples);
[13]:
previous_prog = 0
with tqdm(total=1, bar_format='{desc}{percentage:3.0f}%|{bar}|') as tq:
    tq.set_description(f'Get {nsamples} samples from {remote_qpu.name}')
    while not remote_job.is_complete:
        tq.update(remote_job.status.progress/100-previous_prog)
        previous_prog = remote_job.status.progress/100
        time.sleep(1)
    tq.update(1-previous_prog)
    tq.close()

print(f"Job status = {remote_job.status()}")
Job status = SUCCESS
[14]:
results = remote_job.get_results()
pdisplay(results['results'])
state count
|0,1> 97117
|1,0> 96864
|1,1> 3234
|0,2> 1395
|2,0> 1388
|1,2> 2

You now know the basic ways to compute results using the Quandela Cloud.