
add (example) loss function that can handle sparse band structure calculations

basnijholt opened this issue 5 years ago • 2 comments (status: Open)

(original issue on GitLab)

opened by Rafal Skolasinski (@r-j-skolasinski) at 2017-12-08T13:13:46.873Z

Typical (problematic) behaviour in such simulations can be mimicked with

import numpy as np

def levels(x):
    return np.array([x**2 % 1.5, (x**2 + 1) % 1.5])

which, with regular sampling, looks like this:

(screenshot: the two levels sampled on a regular grid)
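
For reference, a minimal sketch of how such a regular-sampling plot can be reproduced; matplotlib, the x-range, and the number of samples are assumptions and not part of the original issue:

import numpy as np
import matplotlib.pyplot as plt

def levels(x):
    # two "bands" that wrap around because of the modulo, mimicking band crossings
    return np.array([x**2 % 1.5, (x**2 + 1) % 1.5])

xs = np.linspace(-1.5, 1.5, 201)        # regular (uniform) sampling
ys = np.array([levels(x) for x in xs])  # shape (201, 2), one column per level

plt.plot(xs, ys, ".")
plt.xlabel("x")
plt.ylabel("levels(x)")
plt.show()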

basnijholt • Dec 19 '18 16:12

originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-03-10T21:04:28.222Z on GitLab

Related: https://gitlab.kwant-project.org/kwant/kwant/merge_requests/213

basnijholt • Dec 19 '18 16:12

Here is a code example that shows the performance difference when using adaptive with sparse diagonalization: adaptive's default loss function (learner1) vs. a custom loss function abs_min_loss (learner2). learner2 needs only 30 points to converge, while learner1 needs 1370 points. The plotting code is not included.

import warnings

import adaptive
import holoviews as hv  # used for the (omitted) plotting
import numpy as np
import scipy
import scipy.sparse.linalg as sla

adaptive.notebook_extension()
warnings.filterwarnings('ignore')

def y(a):
    # Interpolate between two small Hamiltonians and return the eigenvalues
    # closest to zero, obtained with sparse shift-invert diagonalization.
    H1 = np.array([[1.95, -0.64, 0, 0],
                   [-0.64, 0.1, 0, 0],
                   [0, 0, 0.71, -0.19],
                   [0, 0, -0.19, -0.12]])
    H2 = np.array([[1, -2 * 0.64, 0, 0],
                   [-2 * 0.64, 0.3, 0, 0],
                   [0, 0, 0.5 * 0.71, -0.3 * 0.19],
                   [0, 0, -0.3 * 0.19, -0.12]])
    Ha = a * H1 + (1 - a) * H2
    Hb = np.kron(np.array([[1, 0], [0, -1]]), Ha)
    Hc = scipy.sparse.coo_matrix(Hb)
    E = sla.eigsh(Hc, k=7, sigma=0, return_eigenvectors=False)
    return E

# learner1: adaptive's default loss function
learner1 = adaptive.Learner1D(y, bounds=(0, 3))
runner1 = adaptive.Runner(learner1, goal=lambda l: l.loss() < 0.05)

def abs_min_loss(xs, ys):
    # Reduce each vector of eigenvalues to the one closest to zero,
    # then apply the default loss to that scalar curve.
    from adaptive.learner.learner1D import default_loss
    ys = [np.abs(e).min() for e in ys]
    return default_loss(xs, ys)

# learner2: the custom loss function
learner2 = adaptive.Learner1D(y, bounds=(0, 3), loss_per_interval=abs_min_loss)
runner2 = adaptive.Runner(learner2, goal=lambda l: l.loss() < 0.05)
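
Since the original plotting code is not included, here is one possible sketch (not the author's code) for comparing the two learners once the runners have finished; it only uses Learner1D's npoints and plot() together with standard holoviews layout methods:

# Sketch only: compare how many points each learner needed and show both results.
print(learner1.npoints, learner2.npoints)

plots = (learner1.plot().relabel('default_loss')
         + learner2.plot().relabel('abs_min_loss'))
plots.cols(1)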

(screenshot: plot comparing the results of learner1 and learner2)

maxhoskam • Sep 16 '19 12:09