Remote computing on Quandela Cloud

Here, we aim to show how to connect to Quandela Cloud services to perform computations remotely on real QPUs and simulators. We are going to use a simple two-mode circuit.

Please note that other cloud providers exist besides Quandela; see providers for additional information.

[1]:
import time
import math
from pprint import pprint
from tqdm.notebook import tqdm

import perceval as pcvl
from perceval.algorithm import Sampler

First, define your Perceval objects (circuit, input state, etc.) as usual.

[2]:
input_state = pcvl.BasicState([1, 1])

c = pcvl.Circuit(2)
c.add(0, pcvl.BS())
c.add(0, pcvl.PS(phi = math.pi/4))
c.add(0, pcvl.BS())

pcvl.pdisplay(c)
[2]:
../_images/notebooks_Remote_computing_4_0.svg

Now, visit cloud.quandela.com and log in to see which QPUs and simulators are available, as well as their specifications. You have to create a token that will let you use our cloud. You can save it once and for all in Perceval (you can even do it from a terminal). If your token changes, just call the same method again with the new token.

[4]:
# Save your token into Perceval persistent data, you only need to do it once
pcvl.save_token('YOUR_API_KEY')

Once you have chosen the platform you want your code executed on, all you have to do is copy its name and define a RemoteProcessor with it. Don’t forget to give the platform access rights to your token. Note that simulator platform names start with “sim:” while actual QPU names start with “qpu:”.

[3]:
remote_simulator = pcvl.RemoteProcessor("sim:ascella")

You can now access the specifications of the platform directly in Perceval.

[4]:
specs = remote_simulator.specs
pcvl.pdisplay(specs["specific_circuit"])
[4]:
../_images/notebooks_Remote_computing_10_0.svg
[6]:
print("Platform constraints:")
pprint(specs["constraints"])
print("\nPlatform supported parameters:")
pprint(specs["parameters"])
Platform constraints:
{'max_mode_count': 12,
 'max_photon_count': 6,
 'min_mode_count': 1,
 'min_photon_count': 1}

Platform supported parameters:
{'HOM': 'indistinguishability value, using HOM model (default 0.92)',
 'final_mode_number': 'number of modes of the output states. States having a '
                      'photon on unused modes will be ignored. Useful when '
                      'using computed circuits (default input_state.m)',
 'g2': 'g2 value (default 0.003)',
 'min_detected_photons': 'minimum number of detected photons to keep a state '
                         '(default input_state.n)',
 'phase_imprecision': 'imprecision on the phase shifter phases (default 0)',
 'ppnr': 'enable Pseudo Photon Number Resolving detection on a given set of '
         'modes (pass a list of indexes). Available modes for PPNR are 0, 1 '
         'and 2 (default [] i.e. PPNR disabled on all modes). PPNR has a '
         'chance of 1 - 1/2^N to detect 2 photons when N photons are output in '
         'the same mode.',
 'transmittance': 'probability that an emitted photon is sent to the system '
                  'and is detected (default 0.06)'}
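
Before submitting a job, it can be useful to check that your circuit and input state fit within these constraints. The short check below is only illustrative: it uses the constraints dictionary printed above together with the circuit and input state defined earlier, and is not part of the Perceval API.

[ ]:
# Minimal sanity check against the platform constraints shown above (helper code, not part of the Perceval API)
constraints = specs["constraints"]
assert constraints["min_mode_count"] <= c.m <= constraints["max_mode_count"], \
    f"circuit uses {c.m} modes, outside the supported range"
assert constraints["min_photon_count"] <= input_state.n <= constraints["max_photon_count"], \
    f"input state carries {input_state.n} photons, outside the supported range"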

Now, we can specify parameters to tune our computation. Platform-specific parameters have to be set through the dedicated set_parameter method (or set_parameters to set several at once).

[7]:
remote_simulator.set_circuit(c)
remote_simulator.with_input(input_state)

remote_simulator.set_parameters({  # Noisy source parameters
    "HOM": .95,
    "transmittance": .1,
    "g2": .01
})
remote_simulator.min_detected_photons_filter(1)  # Output state filtering on the basis of detected photons

We can now use the Sampler with our RemoteProcessor. You have to set a maximum shot threshold (the max_shots_per_call named parameter) when creating a Sampler with a remote platform; local simulations do not require this threshold. A shot is any detected event containing at least one photon: it is easy to explain and easy to measure. This shot threshold prevents the user from consuming too many QPU resources, as the acquisition stops once it is reached. Each job generated by a Sampler call may consume shots up to this threshold (e.g. calling sample_count three times can use at most 3*max_shots_per_call shots).

[8]:
nsamples = 200000
sampler = Sampler(remote_simulator, max_shots_per_call=nsamples)  # You have to set a 'max_shots_per_call' named parameter
# Here, with `min_detected_photons_filter` set to 1, all shots are de facto samples of interest.
# Thus, in this particular case, the expected sample number can be used as the shots threshold.

sampler.default_job_name = "My sampling job"  # All jobs created by this sampler instance will have this custom name on the cloud

remote_job = sampler.sample_count.execute_async(nsamples)  # Create a job
print(remote_job.id)  # Once created, the job was assigned a unique id
ba766f74-38d4-4263-9795-c315770f29b9

The request has now been sent to a remote platform through the cloud. As the computation is asynchronous (execute_async), other computations can be performed locally before the results are retrieved. If you go to the Quandela Cloud website again, you can see the job and its completion status.
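
For instance, while the remote job runs, you could compute the ideal (noise-free) distribution of the same circuit with a local backend and compare it later with the noisy remote counts. This is a minimal sketch assuming the local SLOS backend bundled with Perceval; as mentioned above, local simulations do not need a shot threshold.

[ ]:
# Optional: local, synchronous simulation of the same circuit with an ideal source (sketch)
local_processor = pcvl.Processor("SLOS", c)
local_processor.with_input(input_state)
local_processor.min_detected_photons_filter(1)

local_sampler = Sampler(local_processor)  # no 'max_shots_per_call' needed locally
ideal_counts = local_sampler.sample_count(10000)["results"]
print(ideal_counts)

In this example, though, let’s simply wait for the remote computation to finish.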

[9]:
previous_prog = 0
with tqdm(total=1, bar_format='{desc}{percentage:3.0f}%|{bar}|') as tq:
    tq.set_description(f'Get {nsamples} samples from {remote_simulator.name}')
    while not remote_job.is_complete:
        tq.update(remote_job.status.progress/100-previous_prog)
        previous_prog = remote_job.status.progress/100
        time.sleep(1)
    tq.update(1-previous_prog)
    tq.close()

print(f"Job status = {remote_job.status()}")
Job status = SUCCESS

Once the previous cell has run to the end, the job is finished (again, you can see its status on the website). Let’s retrieve the results to do some computation. In this case, the computation is expected to be fast (unless the simulator is unavailable or many jobs are queued), so we can use the remote_job object we created previously. If the computation had lasted a long time, we could have shut down our computer, turned it back on later, and created a new job object to retrieve the results directly. The job id, which is visible on the website, is required to resume a job and load its results.

[10]:
''' # To retrieve your job using a job id (the token was already saved with pcvl.save_token)
remote_processor = pcvl.RemoteProcessor("sim:ascella")
async_job = remote_processor.resume_job(job_id)
'''

results = remote_job.get_results()
print(results['results'])
{
  |1,0>: 97045
  |0,1>: 97458
  |1,1>: 5497
}
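
As a simple post-processing example, the raw counts can be turned into estimated output-state probabilities. The sketch below is plain Python over the returned counts, assuming they behave like a standard mapping:

[ ]:
# Normalise the sample counts into estimated probabilities (illustrative post-processing)
counts = results["results"]
total = sum(counts.values())
for state, count in counts.items():
    print(f"{state}: {count / total:.4f}")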

You can run the same sampling on the corresponding QPU. To manage your QPU credits, you can estimate the number of shots you would need for a particular data acquisition. Please note that the maximum shot number and the maximum sample number act as a dual threshold system: as soon as one of these thresholds is exceeded, the acquisition stops and the results are returned.

[11]:
qpu_platform_name = "qpu:ascella"
nsamples = 200000

remote_qpu = pcvl.RemoteProcessor(qpu_platform_name)
remote_qpu.set_circuit(c)
remote_qpu.with_input(input_state)

print("With this setup:")
remote_qpu.min_detected_photons_filter(2)
required_shots = remote_qpu.estimate_required_shots(nsamples=nsamples)
print(f"To gather {nsamples} 2-photon coincidences on {qpu_platform_name}, you would need around {required_shots} shots.")

remote_qpu.min_detected_photons_filter(1)
required_shots = remote_qpu.estimate_required_shots(nsamples=nsamples)
print(f"To gather {nsamples} photon events (with at least 1 photon) on {qpu_platform_name}, you would need exactly {required_shots} shots.")
With this setup:
To gather 200000 2-photon coincidences on qpu:ascella, you would need around 12933333 shots.
To gather 200000 photon events (with at least 1 photon) on qpu:ascella, you would need exactly 200000 shots.
[13]:
sampler_on_qpu = Sampler(remote_qpu, max_shots_per_call=nsamples)

remote_job = sampler_on_qpu.sample_count
remote_job.name = "QPU sampling"  # You may also specify a name to individual jobs
remote_job.execute_async(nsamples);
[14]:
previous_prog = 0
with tqdm(total=1, bar_format='{desc}{percentage:3.0f}%|{bar}|') as tq:
    tq.set_description(f'Get {nsamples} samples from {remote_qpu.name}')
    while not remote_job.is_complete:
        tq.update(remote_job.status.progress/100-previous_prog)
        previous_prog = remote_job.status.progress/100
        time.sleep(1)
    tq.update(1-previous_prog)
    tq.close()

print(f"Job status = {remote_job.status()}")
Job status = SUCCESS
[12]:
results = remote_job.get_results()
print(results['results'])
{
  |1,0>: 211538
  |0,1>: 178621
  |1,1>: 5013
}
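
From these counts you can, for example, estimate the observed 2-photon coincidence fraction and compare it with the rate implied by estimate_required_shots above. The sketch below only uses the counts returned by the job:

[ ]:
# Observed fraction of 2-photon coincidence events among all detected events (sketch)
qpu_counts = results["results"]
total_events = sum(qpu_counts.values())
coincidences = sum(count for state, count in qpu_counts.items() if state.n == 2)
print(f"2-photon coincidence fraction: {coincidences / total_events:.4f}")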