
SpikeInterface not found in docker container

Open bxy666666 opened this issue 6 months ago • 3 comments

[screenshot of the error]

Hello, I have been using this project for quite a long time, but I have run into this problem a few times. Could you tell me how I can solve it?

bxy666666 avatar May 09 '25 10:05 bxy666666

Can you paste the entire command?

alejoe91 avatar May 09 '25 12:05 alejoe91


sorting_KS2 = ss.run_sorter(
    sorter_name="kilosort2",
    recording=recording_preprocessed,
    extra_requirements=["numpy==1.26"],
    docker_image=True,
    verbose=True,
    output_folder=kilosort2_output_folder_path,
)

# Extract waveforms
we_KS2 = si.extract_waveforms(
    recording=recording_preprocessed,
    sorting=sorting_KS2,
    folder=waveforms_folder_path,
    overwrite=None,
)

# Save each unit's waveforms as a .npy file
for unit_id in we_KS2.unit_ids:
    # Get the waveform data for the current unit
    waveforms = we_KS2.get_waveforms(unit_id)

    # Build the file name, e.g. "unit_1_waveforms.npy"
    npy_filename = f"unit_{unit_id}_waveforms.npy"
    npy_filepath = os.path.join(npy_waveform_folder, npy_filename)

    # Save the waveforms as a .npy file
    np.save(npy_filepath, waveforms)
    print(f"Unit {unit_id} waveforms saved to {npy_filepath}")

# Compute various metrics
print("Computing metrics")
amplitudes = spost.compute_spike_amplitudes(we_KS2)
unit_locations = spost.compute_unit_locations(we_KS2)
spike_locations = spost.compute_spike_locations(we_KS2)
correlograms, bins = spost.compute_correlograms(we_KS2)
similarity = spost.compute_template_similarity(we_KS2)
ISI = spost.compute_isi_histograms(we_KS2, window_ms=100.0, bin_ms=2.0, method="auto")
metric = spost.compute_template_metrics(we_KS2, include_multi_channel_metrics=True)
print("Finished computing metrics")

# Compute quality metrics
qm_params = sqm.get_default_qm_params()
qm_params["presence_ratio"]["bin_duration_s"] = 1
qm_params["amplitude_cutoff"]["num_histogram_bins"] = 5
qm_params["drift"]["interval_s"] = 2
qm_params["drift"]["min_spikes_per_interval"] = 2
qm = sqm.compute_quality_metrics(we_KS2, qm_params=qm_params)

# Get the spike trains for all units and save them
spike_trains = {}
for unit_id in sorting_KS2.unit_ids:
    spike_train = sorting_KS2.get_unit_spike_train(unit_id, start_frame=None, end_frame=None)
    spike_trains[unit_id] = spike_train

# Save to a NumPy file
np.save(os.path.join(data_location, f'aligned_spike_trains_{suffix}.npy'), spike_trains)

# Load the data back
loaded_spike_trains = np.load(os.path.join(data_location, f'aligned_spike_trains_{suffix}.npy'),
                              allow_pickle=True).item()

# Save to a CSV file
data = []
for unit_id, spike_train in spike_trains.items():
    for spike in spike_train:
        data.append([unit_id, spike])

df = pd.DataFrame(data, columns=['unit_id', 'spike_time'])
df.to_csv(os.path.join(data_location, f'aligned_spike_trains_{suffix}.csv'), index=False)
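For reference, a minimal self-contained sketch of the spike-train save/load round trip used in the snippet above (the unit ids and spike times here are made up for illustration; only `numpy` and `pandas` are assumed):

```python
import os
import tempfile

import numpy as np
import pandas as pd

# Toy spike trains keyed by unit id (sample indices)
spike_trains = {
    0: np.array([100, 250, 900], dtype=np.int64),
    1: np.array([40, 410], dtype=np.int64),
}

with tempfile.TemporaryDirectory() as tmp:
    npy_path = os.path.join(tmp, "aligned_spike_trains_demo.npy")
    csv_path = os.path.join(tmp, "aligned_spike_trains_demo.csv")

    # np.save wraps the dict in a 0-d object array, so loading it back
    # requires allow_pickle=True, and .item() unwraps the dict again.
    np.save(npy_path, spike_trains)
    loaded = np.load(npy_path, allow_pickle=True).item()

    # Long-format CSV: one row per spike, as in the snippet above
    rows = [[uid, t] for uid, train in spike_trains.items() for t in train]
    pd.DataFrame(rows, columns=["unit_id", "spike_time"]).to_csv(csv_path, index=False)
    df = pd.read_csv(csv_path)
```

The `allow_pickle=True` flag is required because the saved object is a pickled dict rather than a plain numeric array; if you only need the CSV, the `.npy` step can be dropped entirely.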

This is my code, and the error is in the screenshot. I've also noticed that every time I run this code, a new container is created. Why does that happen?

[screenshot of the error]

bxy666666 avatar May 09 '25 13:05 bxy666666

Hi @bxy666666

I see you're using quite an old spikeinterface version, so it would be hard to debug and fix the problem. Can you try upgrading your code to the latest version?

You can use this guide to change the postprocessing part: https://spikeinterface.readthedocs.io/en/latest/tutorials/waveform_extractor_to_sorting_analyzer.html

alejoe91 avatar May 12 '25 08:05 alejoe91