Slow Simulation with losses

I’m trying to simulate a rather simple circuit that includes some lossy channels, generated by the code at the end of the post.
However, the simulation takes very long (~10 minutes) for only 5 photons in 10 modes, and it runs out of memory for 6 photons in 12 modes.
I’m running on an 11th-generation Intel i7 with 8 cores and 32 GB of RAM.

Am I doing something wrong, or does such a simple experiment really require that many resources?


    from random import random
    import perceval as pcvl

    def qpu(width, loss):
        p = pcvl.Processor("SLOS", m_circuit=width)
        for i in range(width - 1):
            # Loop body reconstructed from context: a beam splitter on modes
            # (i, i+1) with a random angle, followed by a loss channel (LC)
            # between its bottom output and the next beam splitter's top input.
            p.add(i, pcvl.BS(theta=random()))
            p.add(i + 1, pcvl.LC(loss))
        return p


Running the simulation:

    m = 12
    ca = pcvl.algorithm.Analyzer(qpu(m, 0.1),
                                 input_states=[pcvl.BasicState([1, 0] * (m // 2))],
                                 output_states='*')  # the original call was truncated; '*' means all output states
Hello Tobias,

Thanks for your message.

As you see, we still have work to do on optimizing simulation with components that cannot be represented by a unitary matrix (such as LC)!
If you are trying to simulate balanced losses across all modes (i.e. every loss channel with the same loss, placed at the beginning or at the end of the circuit, which is not what your code is doing), you can get back to a unitary circuit simulation by using the `losses` parameter of Perceval's source model:

    def qpu(width, loss):
        p = pcvl.Processor("SLOS", m_circuit=width, source=pcvl.Source(losses=loss))
        for i in range(width - 1):
            # Same couplers as before, reconstructed, but without the in-circuit LC
            p.add(i, pcvl.BS(theta=random()))
        p.mode_post_selection(1)  # do not filter states where multiple photons are detected on one mode
        return p

It took me 2.5 s to run your code with this change on my laptop, which seems to be slower than your computer.

If you really need to place your loss channels as planned (i.e. between the bottom output of a beam splitter and the top input of the next one) and are interested in all possible output states, then as of Perceval 0.7 I won't be able to help you optimize your computation.
However, we’re working on some major improvements for computation back-ends that should optimize Perceval in the future.
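As a side note on why in-circuit loss is fundamentally harder (a standard textbook construction, not anything Perceval-specific): a loss channel of transmittance 1 − loss is equivalent to a beam splitter coupling the mode to an unobserved ancilla mode, so a unitary dilation of your circuit would need one extra mode per loss channel. A quick sanity check of the corresponding beam-splitter angle:

```python
import math

def loss_to_bs_theta(loss):
    """Beam-splitter mixing angle whose squared transmitted
    amplitude equals the channel transmittance 1 - loss."""
    return math.acos(math.sqrt(1.0 - loss))

loss = 0.1
theta = loss_to_bs_theta(loss)
transmittance = math.cos(theta) ** 2  # probability the photon survives
reflectance = math.sin(theta) ** 2    # probability it leaks into the ancilla

print(transmittance, reflectance)
```

Tracing out the ancilla after such a beam splitter reproduces the loss channel exactly, which is why losses at the circuit boundary (or in the source model) are cheap, while losses buried between components force the simulator out of the pure-state picture.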

Best regards,


Hey Eric,
this is a pity, but thanks for your reply.
I already tried other loss models before, and you're right, they work better.

However, I get from your response that for the type of circuit I’m running (with losses inside the circuit), there is currently no specific backend that would support this kind of simulation.

It would be a really great feature to have the ability to run such simulations, so 👍 from my side for developing it 🙂


Could you maybe elaborate a little more on why this is such a memory-expensive process? Is it that the complexity of the state/matrix explodes, or is the code simply not yet optimized for this case?