Zhaozhou Li
Thank you, Luke, for the hard work! Looking forward to the fix. BTW, one more example of incorrect padding (too large) in the third panel below. From your description, I...
M-Stone's solution does not work properly for some other projections such as 'gnom'.
```
---> 16 ax1.set_xticks([30, 60, 120, 180, 240, 300, 350], crs=ccrs.PlateCarree())
     17 ax1.set_yticks([0, 20, 40], crs=ccrs.PlateCarree())
     18 ...
```
FYI, one can get rid of the warnings by setting `ytickminor` to anything. Any of these works for the above example:
```
ax.altx(ytickminor='log')
ax.altx(ytickminor=True)
ax.altx(ytickminor=False)
```
I guess this is what you need:
```
axs.format(xlim=(1, 1e4), xscale='log', xformatter='log')
```
Try using 'log' instead of 'sci'.
Related issue https://github.com/numba/numba/issues/5275
Try setting `p.subplots(..., sharey=False, ...)`?
Thanks for the information. I'll take a further look at the code of `filesystemadaptor.js`. I found a related issue: https://github.com/Jermolene/TiddlyWiki5/issues/2558
Check `gp.kernel_` instead of `gp.kernel` for each `gp` in `res.models`. You should see that the params in `gp.kernel_` are updated.
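A minimal sketch of the distinction, using a plain scikit-learn `GaussianProcessRegressor` (the toy data and variable names here are illustrative, not from the original thread): in scikit-learn, attributes ending in a trailing underscore are set during `fit`, so `gp.kernel` keeps the prior kernel you passed in, while `gp.kernel_` holds the fitted copy with optimized hyperparameters.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1-D regression data (illustrative only).
X = np.linspace(0, 10, 20).reshape(-1, 1)
y = np.sin(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)

print(gp.kernel)    # the prior kernel, unchanged by fit
print(gp.kernel_)   # the fitted kernel with optimized hyperparameters
```

The prior's `length_scale` stays at 1.0, while `gp.kernel_.length_scale` reflects the value found by the hyperparameter optimizer.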
In principle, gpt-3.5-turbo-16k should support long text, so why does translating always report that the token limit is exceeded? Is this limit imposed by OpenAI or by the app?
So in theory, `max_tokens` should be decided based on the selected model. I'm not familiar with TypeScript, so I'll wait for someone who knows it to make the change!
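The idea above can be sketched as a per-model lookup. This is a minimal Python sketch (the app itself is TypeScript); the table values are OpenAI's published context-window sizes for these two models, and the function name `max_tokens_for` and the fallback value are assumptions for illustration, not the app's actual code.

```python
# Hypothetical lookup: context-window size per model name.
MODEL_CONTEXT_LIMITS = {
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16384,
}

def max_tokens_for(model: str, fallback: int = 4096) -> int:
    """Return the context limit for `model`, or a conservative fallback."""
    return MODEL_CONTEXT_LIMITS.get(model, fallback)

print(max_tokens_for("gpt-3.5-turbo-16k"))  # 16384
```

With a table like this, the request's `max_tokens` can be derived from the selected model instead of being hard-coded to the smallest model's limit.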