Memory functionality of `Callable` Builder API
```python
import numpy as np


class quantum_ansatz:
    def __init__(self, backend):
        self.job_tape = {}
        self.backend = backend

    def ansatz(self, **kwargs):
        # Stub; subclasses override this to build the actual program.
        raise NotImplementedError

    def submit(self, quantum_parameters: dict):
        key = tuple(quantum_parameters.items())
        # Check if the job has been done before
        if key in self.job_tape:
            return self.job_tape[key]
        # Else, submit
        if self.backend["quantum"] is True:
            job = self.ansatz(**quantum_parameters).braket(self.backend["num_shots"])
        elif self.backend["quantum"] is False:
            job = self.ansatz(**quantum_parameters).braket_local_simulator(self.backend["num_shots"])
        else:
            raise ValueError("Bad backend")
        submitted_job = job.submit()
        # Save to tape
        self.job_tape[key] = submitted_job
        # self.to_file("tape.json")
        return submitted_job

    def get_bitstrings(self, quantum_parameters: dict):
        """
        Collect results
        """
        job = self.submit(quantum_parameters)
        return np.array(job.report().bitstrings)
```
This structure, being able to submit multiple "batch jobs" and collect them within a single object, should be native to bloqade.
The above code is what I need to use for hybrid jobs, where I build a different class inherited from this one and override the `ansatz` method.
As of now, doing a `batch_submit` creates a new object, and there is no way to merge these objects back together and have this "shared memory" of previously-executed tasks.
Can you provide a more concrete example of how you actually use this class? I don't know what you mean here. Are you trying to save previously run task results? Might be a dup of #315.
Oh, so you want to cache previous results, so that for the same parameters it returns the old result instead of submitting a new task? How do you guarantee floating-point parameters are equivalent in this case?
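The floating-point concern is concrete: two parameter sets that are numerically "the same" can still produce distinct dictionary keys. A minimal illustration (the `rounded_key` helper and its tolerance are assumptions, not anything in the code above):

```python
# Two ways of arriving at "the same" parameter value produce different keys,
# so a naive tuple-of-items cache treats them as different jobs.
a = {"times": (0.1 + 0.2,), "detuning": (0,)}
b = {"times": (0.3,), "detuning": (0,)}

key_a = tuple(a.items())
key_b = tuple(b.items())

print(key_a == key_b)  # False: 0.1 + 0.2 != 0.3 in binary floating point


# Rounding before keying is one workaround (the tolerance is an assumption):
def rounded_key(params, ndigits=9):
    return tuple((k, tuple(round(x, ndigits) for x in v)) for k, v in params.items())


print(rounded_key(a) == rounded_key(b))  # True
```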
```python
from numpy import meshgrid, array, random

# clustering, get_UDG, and quantum_clustering are user-defined helpers;
# quantum_clustering subclasses quantum_ansatz above.

# Generate a UDG graph
N = 16  # Side length
k = 4   # Number of clusters
x, y = meshgrid(range(N), range(N))
pos = array([x.flatten(), y.flatten()]).T
perm = random.permutation(N**2)
pos = pos[perm[0:int(N**2 * 0.7)]]

# Specify a problem instance
problem = clustering(get_UDG(pos, 0.72), pos, k)
solution_q = quantum_clustering(problem, backend={"quantum": False, "num_shots": 1000})

# get_solution calls get_bitstrings
candidate_solution = solution_q.get_solution({"times": (0.1, 0.5), "detuning": (0, 0)})
problem.show(candidate_solution)
```
So here, what is your key?
In this case, the key is the (hashable) version of `{"times":(0.1,0.5),"detuning":(0,0)}`. It would be much better if it were two keys (one per batch task): `{"times":0.1,"detuning":0}` and `{"times":0.5,"detuning":0}`.
I do hear you on the floating-point problem; perhaps a better key would be the job itself, using an `__eq__` overload.
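Splitting the batch dictionary into per-task keys is a simple zip over the value tuples. A sketch (the `split_batch` name is illustrative, not bloqade API):

```python
def split_batch(params: dict):
    # Split a dict of equal-length value tuples into one dict per batch task.
    # Hypothetical helper; the name is illustrative, not existing bloqade API.
    names = list(params)
    return [dict(zip(names, values)) for values in zip(*params.values())]


tasks = split_batch({"times": (0.1, 0.5), "detuning": (0, 0)})
# tasks == [{"times": 0.1, "detuning": 0}, {"times": 0.5, "detuning": 0}]
```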
So then this tape becomes something like:
```python
if job in self.tape:
    return self.tape[job]
else:
    self.tape[job] = job.submit()
    return self.tape[job]
```
We basically need to overload `__hash__` and `__eq__` for Job objects.
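One subtlety with overloading `__hash__` and `__eq__` is the dict contract: if `a == b` then `hash(a)` must equal `hash(b)`, so a tolerance-based `__eq__` has to be paired with a hash over the same canonicalized values. A hedged sketch (this `JobKey` class and its rounding precision are assumptions, not the bloqade Job class):

```python
# Sketch of a hashable job key: equality and hashing both go through the same
# rounding, so a == b implies hash(a) == hash(b), which dict lookup requires.
class JobKey:
    def __init__(self, params: dict, ndigits: int = 9):
        self.params = params
        self.ndigits = ndigits  # rounding precision is an assumed tolerance

    def _canonical(self):
        # Sort by name and round values to the chosen precision.
        return tuple(sorted(
            (name, round(value, self.ndigits))
            for name, value in self.params.items()
        ))

    def __eq__(self, other):
        return isinstance(other, JobKey) and self._canonical() == other._canonical()

    def __hash__(self):
        return hash(self._canonical())


tape = {}
tape[JobKey({"times": 0.1 + 0.2, "detuning": 0})] = "submitted"
print(JobKey({"times": 0.3, "detuning": 0}) in tape)  # True
```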
I think some of the functionality here can be implemented in the new Callable objects. The idea is that the Callable object can keep a history of all previous calls and store the results as a DataFrame, adding extra metadata that includes not just the parameters that were called, but also which call number each belongs to, as well as the actual shot results.
Also, when we eventually serialize the bloqade IR, we can serialize this object too, to basically checkpoint the hybrid job later on.
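As a rough sketch of what that call history could look like (the `RecordingCallable` class and its method names are illustrative assumptions, not existing bloqade API):

```python
import pandas as pd


# Each call is recorded with its call number, parameters, and shot results,
# and the full history can be viewed as a DataFrame.
class RecordingCallable:
    def __init__(self, fn):
        self.fn = fn
        self.records = []

    def __call__(self, **params):
        result = self.fn(**params)  # stand-in for running the program
        self.records.append({"call_number": len(self.records), **params, "shots": result})
        return result

    def history(self) -> pd.DataFrame:
        return pd.DataFrame(self.records)


runner = RecordingCallable(lambda times, detuning: [0, 1, 1])  # fake shot results
runner(times=0.1, detuning=0)
runner(times=0.5, detuning=0)
print(runner.history())
```

Serializing `records` alongside the IR would give the checkpoint behavior described above.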
> I think some of the functionality here can be implemented in the new Callable objects.
Yes, it's called "memoization" (here is a random google example), mostly used for reducing latency and cost on servers, which I think is exactly what you are looking for.
The way to implement it in our case, I think, is to provide an option `cache` which turns on a calling convention under which the generated Callable always memoizes. I'm not sure querying different past programs is what you actually want, but I'd suggest we design that part after memoization is implemented.
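A minimal sketch of that `cache` option, built as a memoizing wrapper (the `make_callable` name and the option itself are assumptions about the proposed design, not existing bloqade API; `functools.lru_cache` offers similar behavior for positional hashable arguments):

```python
import functools


def make_callable(fn, cache=False):
    # Hypothetical factory: with cache=True, repeated parameter sets reuse the
    # stored result instead of resubmitting.
    if not cache:
        return fn

    memo = {}

    @functools.wraps(fn)
    def wrapper(**params):
        key = tuple(sorted(params.items()))
        if key not in memo:
            memo[key] = fn(**params)  # submit only on a cache miss
        return memo[key]

    return wrapper


calls = []
run = make_callable(lambda **p: calls.append(p) or len(calls), cache=True)
run(times=0.1, detuning=0)
run(times=0.1, detuning=0)  # cache hit: the underlying function ran only once
print(len(calls))  # 1
```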