Remote computing on Quandela Cloud
Here, we aim to show how to connect to Quandela Cloud services in order to run computations remotely on real QPUs and simulators. We are going to use a simple two-mode circuit.
Please note that other cloud providers exist besides Quandela; see providers for additional information.
[1]:
import time
import math
from pprint import pprint
from tqdm.notebook import tqdm
import perceval as pcvl
from perceval.algorithm import Sampler
First, define your Perceval objects (circuit, input state, etc.) as usual.
[2]:
input_state = pcvl.BasicState([1, 1])
c = pcvl.Circuit(2)
c.add(0, pcvl.BS())
c.add(0, pcvl.PS(phi = math.pi/4))
c.add(0, pcvl.BS())
pcvl.pdisplay(c)
[2]:
Now, visit cloud.quandela.com and log in to see which QPUs and simulators are available, as well as their specifications. You will need to create a token to use the cloud. You can save it once and for all in Perceval (you can even do it from a terminal). If your token changes, just call the same method again with the new token.
[4]:
# Save your token into Perceval persistent data, you only need to do it once
pcvl.save_token('YOUR_API_KEY')
Once you have chosen the platform you want your code executed on, all you have to do is copy its name and define a RemoteProcessor with it. Don’t forget to give the platform access rights to your token. Note that simulator platform names start with “sim:” and actual QPU names start with “qpu:”.
[4]:
remote_simulator = pcvl.RemoteProcessor("sim:altair")
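If you prefer not to persist the token on your machine, it can also be passed directly when building the processor. A minimal sketch, assuming the token argument accepted by RemoteProcessor:
# Sketch: provide the token explicitly instead of relying on pcvl.save_token()
remote_simulator = pcvl.RemoteProcessor("sim:altair", token='YOUR_API_KEY')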
You can now access the specifications of the platform directly in Perceval.
[5]:
specs = remote_simulator.specs
pcvl.pdisplay(specs["specific_circuit"])
[5]:
[6]:
print("Platform constraints:")
pprint(specs["constraints"])
print("\nPlatform supported parameters:")
pprint(specs["parameters"])
Platform constraints:
{'max_mode_count': 20,
'max_photon_count': 10,
'min_mode_count': 1,
'min_photon_count': 1}
Platform supported parameters:
{'HOM': 'indistinguishability value, using HOM model (default 0.92)',
'final_mode_number': 'number of modes of the output states. States having a '
'photon on unused modes will be ignored. Useful when '
'using computed circuits (default input_state.m)',
'g2': 'g2 value (default 0.003)',
'min_detected_photons': 'minimum number of detected photons to keep a state '
'(default input_state.n)',
'phase_imprecision': 'imprecision on the phase shifter phases (default 0)',
'transmittance': 'probability that an emitted photon is sent to the system '
'and is detected (default 0.06)',
'use_photon_recycling': 'boolean enabling the Photon Recycling noise '
'mitigation process. (default enabled whenever it can '
'be used)'}
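Since the specs are returned as plain dictionaries, the constraints can also be checked programmatically before submitting anything. Below is a minimal sketch (not part of the original workflow) validating our circuit and input state against them:
# Sketch: make sure the circuit size and photon number fit the platform constraints above
constraints = specs["constraints"]
assert constraints["min_mode_count"] <= c.m <= constraints["max_mode_count"], "unsupported mode count"
assert constraints["min_photon_count"] <= input_state.n <= constraints["max_photon_count"], "unsupported photon count"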
Now, we can specify parameters in order to tune our computation. Platform-specific parameters have to be set through the dedicated set_parameter function (or set_parameters).
[9]:
remote_simulator.set_circuit(c)
remote_simulator.min_detected_photons_filter(1)  # Output state filtering on the basis of detected photons
remote_simulator.with_input(input_state)
remote_simulator.noise = pcvl.NoiseModel(indistinguishability=.95, transmittance=.1, g2=.01)
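For instance, any of the platform parameters listed in the specs above can be set by name. A hedged sketch (the values are arbitrary illustrations, not recommended settings):
# Sketch: set platform-specific parameters by name (names taken from specs["parameters"])
remote_simulator.set_parameter("phase_imprecision", 0)        # one parameter at a time
remote_simulator.set_parameters({"final_mode_number": 2})     # or several at once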
We can now use the Sampler with our RemoteProcessor. You have to set a maximum shot threshold (the max_shots_per_call named parameter) when creating a Sampler with a remote platform; local simulations do not require this threshold. A shot is any detected event containing at least one photon: it is easy to explain and easy to measure. This shot threshold prevents you from consuming too many QPU resources, as the acquisition stops once it is reached. Shots up to this threshold can be consumed by every job generated by Sampler calls (e.g. calling sample_count three times can lead to the use of at most 3*max_shots_per_call shots).
[10]:
nsamples = 200000
sampler = Sampler(remote_simulator, max_shots_per_call=nsamples) # You have to set a 'max_shots_per_call' named parameter
# Here, with `min_detected_photons_filter` set to 1, all shots are de facto samples of interest.
# Thus, in this particular case, the expected sample number can be used as the shots threshold.
sampler.default_job_name = "My sampling job" # All jobs created by this sampler instance will have this custom name on the cloud
remote_job = sampler.sample_count.execute_async(nsamples) # Create a job
print(remote_job.id) # Once created, the job was assigned a unique id
aabaf27d-cad1-4afb-a4bf-1be5437e62f6
The request has now been sent to a remote platform through the cloud. As it is an asynchronous computation (execute_async), other computations can be performed locally before the results are retrieved. In this example, let’s just wait for the end of the computation. If you go to the Quandela Cloud website again, you can see the job and its completion status.
[11]:
previous_prog = 0
with tqdm(total=1, bar_format='{desc}{percentage:3.0f}%|{bar}|') as tq:
    tq.set_description(f'Get {nsamples} samples from {remote_simulator.name}')
    while not remote_job.is_complete:
        tq.update(remote_job.status.progress/100 - previous_prog)
        previous_prog = remote_job.status.progress/100
        time.sleep(1)
    tq.update(1 - previous_prog)
    tq.close()
print(f"Job status = {remote_job.status()}")
Job status = SUCCESS
Once the previous cell has run to the end, the job is finished (again, you can see its status on the website). Let’s retrieve the results to do some computation with them. In this case, the computation is expected to be fast (unless the simulator is unavailable or a lot of jobs are queued), so we can use the remote_job object we created previously. If the computation had lasted a long time, we could have shut down our computer, turned it back on later, and created a new job object that directly retrieves the results. The job id, which is visible on the website, is required to resume a job and load its results.
[12]:
''' # To retrieve your job later using its job id
remote_processor = pcvl.RemoteProcessor("sim:altair")
remote_job = remote_processor.resume_job(job_id)
'''
results = remote_job.get_results()
print(results['results'])
{
|1,0>: 97231
|1,1>: 5591
|0,1>: 97178
}
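The returned counts behave like a mapping from output states to integers, so they can be post-processed directly. For example, a small sketch turning them into estimated probabilities:
# Sketch: normalise the sampled counts into estimated output-state probabilities
counts = results['results']
total = sum(counts.values())
for state, count in counts.items():
    print(f"{state}: {count / total:.4f}")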
You can run the same sampling on the corresponding QPU. In order to manage your QPU credits, you can estimate the number of shots you’d need for a particular data acquisition. Please note that the maximum shot number and maximum sample number act as a dual threshold system: as soon as either threshold is exceeded, the acquisition stops and the results are returned.
[3]:
qpu_platform_name = "qpu:altair"
nsamples = 200000
remote_qpu = pcvl.RemoteProcessor(qpu_platform_name)
remote_qpu.set_circuit(c)
remote_qpu.with_input(input_state)
print("With this setup:")
remote_qpu.min_detected_photons_filter(2)
required_shots = remote_qpu.estimate_required_shots(nsamples=nsamples)
print(f"To gather {nsamples} 2-photon coincidences on {qpu_platform_name}, you would need around {required_shots} shots.")
remote_qpu.min_detected_photons_filter(1)
required_shots = remote_qpu.estimate_required_shots(nsamples=nsamples)
print(f"To gather {nsamples} photon events (with at least 1 photon) on {qpu_platform_name}, you would need exactly {required_shots} shots.")
With this setup:
To gather 200000 2-photon coincidences on qpu:altair, you would need around 17104752 shots.
To gather 200000 photon events (with at least 1 photon) on qpu:altair, you would need exactly 200000 shots.
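As a quick sanity check on these figures, the ratio of requested samples to estimated shots gives the coincidence rate per shot implied by the noise model (illustrative arithmetic using the numbers printed above):
# Sketch: implied probability that a single shot yields a 2-photon coincidence
print(f"Implied coincidence rate: {200000 / 17104752:.2%}")  # roughly 1.2% per shot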
[13]:
sampler_on_qpu = Sampler(remote_qpu, max_shots_per_call=nsamples)
remote_job = sampler_on_qpu.sample_count
remote_job.name = "QPU sampling"  # You may also give individual jobs a custom name
remote_job.execute_async(nsamples);
[14]:
previous_prog = 0
with tqdm(total=1, bar_format='{desc}{percentage:3.0f}%|{bar}|') as tq:
    tq.set_description(f'Get {nsamples} samples from {remote_qpu.name}')
    while not remote_job.is_complete:
        tq.update(remote_job.status.progress/100 - previous_prog)
        previous_prog = remote_job.status.progress/100
        time.sleep(1)
    tq.update(1 - previous_prog)
    tq.close()
print(f"Job status = {remote_job.status()}")
Job status = SUCCESS
[12]:
results = remote_job.get_results()
print(results['results'])
{
|1,0>: 211538
|0,1>: 178621
|1,1>: 5013
}