nengo-loihi

Oscillator is slow

hunse opened this issue 6 years ago · 6 comments

With #132, the oscillator actually works fairly well now. One remaining problem is that it oscillates more slowly than expected.

My best explanation is that the interneurons that we need in the feedback connection slow things down, both because they have an additional 5 ms filter, and because they take some time to spike given an input. This means that the feedback signal (which is the future position of the oscillator, driving the oscillator forward) is slower to arrive, so the oscillator spins less quickly.

One way to address this is by trying to make the interneurons less obtrusive. For example, having higher firing rates would mean there's less delay because of time-to-spike (but then we run into issues because of spike-rate aliasing). We could also reduce the filter, but this would make things noisier.

The other approach is to use some of Aaron's stuff to account for these delays/filters when we compute the feedback weights. The tricky thing is that right now we bake these weights right into the oscillator. So if we want to have an Oscillator that works well in both nengo and nengo_loihi, we need to figure out how to allow for different weights in each situation.
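Concretely, the baked-in weights come from the standard principle-3 mapping; a minimal sketch (my notation, not code from the repo):

import numpy as np

# Standard NEF principle-3 mapping: bake a lowpass(tau) recurrent synapse
# into the feedback weights of a 2D oscillator with angular frequency w.
# Accounting for the interneurons' extra filter and delay would mean
# replacing this mapping (e.g., with something like ss2sim from Aaron's
# nengolib).
w = 2 * np.pi  # 1 Hz oscillation
tau = 0.1      # recurrent synapse time constant

A = np.array([[0.0, -w],
              [w, 0.0]])      # desired dynamics: dx/dt = A.dot(x)
A_rec = tau * A + np.eye(2)   # feedback transform baked into the Oscillator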

(Attached plots: figure_1, figure_2.)

hunse · Dec 07 '18

Can we verify that interneurons are the culprit? I've been trying to do so with similar experiments, but I keep running into the issue described here: https://forum.nengo.ai/t/how-many-neurons-can-be-fully-connected/706. Currently I'm trying to hack together a workaround where a virtual ensemble is realized by many sub-ensembles stacked together in the optimization problem, but I'm having trouble finding the simplest barrier to break through in the code.

arvoelke · Dec 07 '18

Figured out a way to do this (e.g., below demonstrates 100 ensembles, each containing 5 neurons, all optimized to represent the same "virtual ensemble"):

(Plot: "100 Sub-Ensembles × 5 Neurons", the decoded output of the script below.)

import numpy as np
import matplotlib.pyplot as plt

import nengo
from nengo.params import IntParam
from nengo.utils.builder import default_n_eval_points

import nengo_loihi
from nengo_loihi.builder import get_gain_bias, get_samples
from nengo_loihi.neurons import loihi_rates


class VirtualEnsemble(nengo.Network):
    
    n_ensembles = IntParam('n_ensembles', low=1)
    
    def __init__(self, n_ensembles, n_neurons_per_ensemble,
                 intercept_limit=0.95, rng=np.random,
                 label=None, seed=None, add_to_container=None,
                 **ens_kwargs):
        super(VirtualEnsemble, self).__init__(
            label=label, seed=seed, add_to_container=add_to_container)
        
        self.n_ensembles = n_ensembles
        
        for illegal in ('eval_points', 'n_eval_points'):
            if illegal in ens_kwargs:
                raise ValueError("Ensemble parameter '%s' is unsupported" % illegal)

        self.ensembles = []

        with self:
            for _ in range(n_ensembles):
                ens = nengo.Ensemble(n_neurons=n_neurons_per_ensemble, **ens_kwargs)

                # Sample neuron parameters up front, the same way the
                # nengo-loihi builder does, so they are fixed at model
                # construction time
                gain, bias, max_rates, intercepts = get_gain_bias(
                    ens, rng=rng, intercept_limit=intercept_limit)

                ens.gain = gain
                ens.bias = bias
                ens.max_rates = max_rates
                ens.intercepts = intercepts

                ens.encoders = get_samples(
                    ens.encoders, ens.n_neurons, ens.dimensions, rng=rng)

                self.ensembles.append(ens)

    def add_input(self, pre, **kwargs):
        with self:
            for post in self.ensembles:
                nengo.Connection(pre, post, **kwargs)

    def add_output(self,
                   function=lambda x: x, 
                   eval_points=nengo.dists.UniformHypersphere(surface=False),
                   solver=nengo.solvers.LstsqL2(),
                   dt=0.001,
                   rng=np.random):
        # TODO:
        # - assumes function is a callable

        if not isinstance(eval_points, nengo.dists.Distribution):
            raise TypeError("eval_points (%r) must be a "
                            "nengo.dists.Distribution" % eval_points)
        
        rep = self.ensembles[0]  # representative of all sub-ensembles
        n = rep.n_neurons * self.n_ensembles
        n_points = default_n_eval_points(n, rep.dimensions)
        eval_points = eval_points.sample(n_points, rep.dimensions, rng=rng)
        
        # Stack the tuning curves of all sub-ensembles into one activity
        # matrix, so the solver treats them as a single large ensemble
        A = np.empty((n_points, n))
        Y = np.asarray([function(ep) for ep in eval_points])
        size_out = Y.shape[1]

        for i, ens in enumerate(self.ensembles):
            # Compute this sub-ensemble's Loihi-discretized firing rates
            x = np.dot(eval_points, ens.encoders.T / ens.radius)
            activities = loihi_rates(ens.neuron_type, x, ens.gain, ens.bias, dt)
            A[:, i*ens.n_neurons:(i+1)*ens.n_neurons] = activities

        D, info = solver(A, Y, rng=rng)  # AD ~ Y
        assert D.shape == (n, size_out)

        with self:
            output = nengo.Node(size_in=size_out)
            for i, ens in enumerate(self.ensembles):
                # Slice this sub-ensemble's rows out of the full decoder matrix
                nengo.Connection(
                    ens.neurons, output, synapse=None,
                    transform=D[i*ens.n_neurons:(i+1)*ens.n_neurons, :].T)

        return output, info

n_ensembles = 100
n_neurons_per_ensemble = 5
tau_probe = 0.005

with nengo.Network() as model:
    u = nengo.Node(lambda t: np.sin(2*np.pi*t))
    
    vens = VirtualEnsemble(
        n_ensembles=n_ensembles,
        n_neurons_per_ensemble=n_neurons_per_ensemble,
        dimensions=1)
    
    vens.add_input(u, synapse=None)
    x_hat, info = vens.add_output()
    # vens.add_input(x_hat, synapse=tau_recurrent)

    p = nengo.Probe(x_hat, synapse=tau_probe)

with nengo_loihi.Simulator(model, precompute=True) as sim:
    sim.run(2.0)

plt.figure()
plt.title(r"%s Sub-Ensembles $\times$ %d Neurons" % (
    n_ensembles, n_neurons_per_ensemble))
plt.plot(sim.trange(), sim.data[p])
plt.xlabel("Time (s)")
plt.show()

However, as soon as I try to recurrently connect it to itself (e.g., by vens.add_input(x_hat, ...)), I get the following error:

---> 26     sim.run(2.0)

~/CTN/nengo-loihi/nengo_loihi/simulator.py in run(self, time_in_seconds)
    558             logger.info("Running %s for %f seconds, or %d steps",
    559                         self.model.label, time_in_seconds, steps)
--> 560             self.run_steps(steps)
    561 
    562     def step(self):

~/CTN/nengo-loihi/nengo_loihi/simulator.py in run_steps(self, steps)
    720             self._make_run_steps()
    721         try:
--> 722             self._run_steps(steps)
    723         except Exception:
    724             if "loihi" in self.sims and self.sims["loihi"].use_snips:

~/CTN/nengo-loihi/nengo_loihi/simulator.py in emu_precomputed_host_pre_only(steps)
    633                     host_pre.run_steps(steps)
    634                     self._host2chip(emulator)
--> 635                     emulator.run_steps(steps)
    636                 self._run_steps = emu_precomputed_host_pre_only
    637 

~/CTN/nengo-loihi/nengo_loihi/loihi_cx.py in run_steps(self, steps)
   1075         """
   1076         for _ in range(steps):
-> 1077             self.step()
   1078 
   1079     def _filter_probe(self, cx_probe, data):

~/CTN/nengo-loihi/nengo_loihi/loihi_cx.py in step(self)
   1002 
   1003                     weights, indices = synapses.axon_weights_indices(
-> 1004                         spike.axon_id, atom=spike.atom)
   1005                     qb[0, cx_base + indices] += weights
   1006 

~/CTN/nengo-loihi/nengo_loihi/loihi_cx.py in axon_weights_indices(self, axon_idx, atom)
    455     def axon_weights_indices(self, axon_idx, atom=0):
    456         weight_idx = self.axon_weight_idx(axon_idx)
--> 457         w = self.weights[weight_idx]
    458         i = self.indices[weight_idx]
    459         return w[atom, :], i[atom, :]

IndexError: list index out of range

arvoelke · Dec 07 '18

Yeah, that error definitely shouldn't happen.

Even so, I'm not sure if your script would do what you want. x_hat is a Node, and since you probe it, it can't be removed as a passthrough node and must be done off-chip. That's going to mean weights for any connections going into it will also be off-chip.

I think verifying the effects of interneurons here is a good idea, but I would go about it the opposite way: make a pure nengo model with interneurons on the feedback connection to the oscillator. #132 makes the interneuron construction much clearer, so it shouldn't be too hard to make equivalent interneurons in nengo. I'm happy to look into that.
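A rough sketch of that experiment (the ensemble sizes and interneuron parameters here are placeholders, not the exact ones nengo-loihi builds):

import numpy as np
import nengo

w = 2 * np.pi      # target oscillation frequency (rad/s)
tau = 0.1          # recurrent synapse
tau_inter = 0.005  # the extra interneuron filter

with nengo.Network(seed=0) as net:
    kick = nengo.Node(lambda t: [1, 0] if t < 0.1 else [0, 0])
    osc = nengo.Ensemble(200, dimensions=2)
    inter = nengo.Ensemble(200, dimensions=2)  # stand-in for the interneurons

    nengo.Connection(kick, osc, synapse=None)
    # Route the feedback through the interneurons, adding their filter and
    # time-to-spike on top of the usual recurrent synapse
    nengo.Connection(osc, inter, synapse=tau,
                     transform=tau * np.array([[0., -w], [w, 0.]]) + np.eye(2))
    nengo.Connection(inter, osc, synapse=tau_inter)
    p = nengo.Probe(osc, synapse=0.01)

with nengo.Simulator(net) as sim:
    sim.run(2.0)

If the interneuron hypothesis is right, the probed oscillation should come out visibly slower than w.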

hunse · Dec 07 '18

> Even so, I'm not sure if your script would do what you want. x_hat is a Node, and since you probe it, it can't be removed as a passthrough node and must be done off-chip. That's going to mean weights for any connections going into it will also be off-chip.

Ah, right. What about:

    vens.add_input(u, synapse=tau, transform=tau)
    x_hat, info = vens.add_output()
    vens.add_input(x_hat, synapse=tau)

    x_hat_copy, info = vens.add_output()
    p = nengo.Probe(x_hat_copy, synapse=tau_probe)

So that vens -> x_hat -> vens is collapsed as a passthrough, and then only the readout vens -> x_hat_copy -> p is done off-chip?
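For reference, that wiring is the standard principle-3 integrator mapping: the target dynamics dx/dt = u have A = 0 and B = I, so the feedback transform is tau*A + I = I (identity) and the input transform is tau*B = tau. Filled in (tau is assumed; it isn't defined in the snippet above):

    tau = 0.1  # assumed value
    with model:
        vens.add_input(u, synapse=tau, transform=tau)  # input: tau * B, B = I
        x_hat, info = vens.add_output()
        vens.add_input(x_hat, synapse=tau)             # feedback: tau * A + I = I
        x_hat_copy, info = vens.add_output()           # second decode for off-chip readout
        p = nengo.Probe(x_hat_copy, synapse=tau_probe)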

arvoelke · Dec 07 '18

> but I would go about it the opposite way:

A motivating factor in doing it this way is that I'd like to experiment with scaling the integrator on Loihi up to at least 1,024 neurons, which I currently can't do without either blowing through the per-core memory or using interneurons. With the above VirtualEnsemble I could, but I'm still getting that IndexError: list index out of range. I will try to isolate the error with a minimal reproducer and post it as a separate issue.

arvoelke · Dec 07 '18

I isolated the error in #152. The other error (now removed via an edit) was due to an oversight in my code: get_gain_bias uses a default intercept_limit of 1 instead of the 0.95 that the builder usually passes in. Passing in 0.95 explicitly avoids the error, and keeps the performance independent of the chosen partitioning. I've edited that post now to limit confusion. #152 is the only hurdle to scaling up large all-to-all recurrent ensembles, from what I can see now.
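For concreteness, the fix (already reflected in the VirtualEnsemble code above) is just to pass the limit explicitly:

    # Match the 0.95 intercept_limit the nengo-loihi builder normally passes,
    # rather than get_gain_bias's default of 1
    gain, bias, max_rates, intercepts = get_gain_bias(
        ens, rng=rng, intercept_limit=0.95)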

arvoelke · Dec 08 '18