velero-plugin
Disk created in the wrong zone during PV restore
What steps did you take and what happened: When velero restores a PV, the Alibaba Cloud operation log shows attach-disk failures, because the backed-up PV and the disk created by the velero Alibaba Cloud plugin are not in the same zone. For example, my backed-up PV is in cn-shanghai-l, but the created disk is in cn-shanghai-g.
What did you expect to happen: The created disk should be in the same zone as the backed-up PV.
Anything else you would like to add: See this line of code: https://github.com/AliyunContainerService/velero-plugin/blob/master/velero-plugin-for-alibabacloud/volume_snapshotter.go#L104
func (b *VolumeSnapshotter) CreateVolumeFromSnapshot(snapshotID, volumeType, volumeAZ string, iops *int64) (volumeID string, err error) {
	// describe the snapshot so we can apply its tags to the volume
	snapReq := ecs.CreateDescribeSnapshotsRequest()
	snapReq.SnapshotIds = getJSONArrayString(snapshotID)
	snapRes, err := b.ecs.DescribeSnapshots(snapReq)
	if err != nil {
		return "", errors.WithStack(err)
	}
	if count := len(snapRes.Snapshots.Snapshot); count != 1 {
		return "", errors.Errorf("expected 1 snapshot from DescribeSnapshots for %s, got %v", snapshotID, count)
	}
	tags := getTagsForCluster(snapRes.Snapshots.Snapshot[0].Tags.Tag)
	volumeAZ, err = getMetaData(metadataZoneKey)
	if err != nil {
		return "", errors.Errorf("failed to get zone-id, got %v", err)
	}
	...
}
This code calls getMetaData to read the metadata of the instance the velero pod is running on, and overwrites the volumeAZ parameter passed in by velero with that instance's zone. As a result, the disk created by the plugin ends up in a different zone from the one velero requested. Could this line be removed to fix the problem? Thanks.
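To illustrate the proposed fix, here is a minimal, hypothetical sketch of the zone-selection logic (resolveZone and the inlined metadata lookup are made up for this example; they are not part of the plugin). Instead of unconditionally overwriting volumeAZ, the idea is to honor the zone Velero passes in and only fall back to the velero pod's instance metadata when no zone was supplied:

```go
package main

import "fmt"

// resolveZone picks the zone for the restored disk. Hypothetical helper:
// prefer the volumeAZ that Velero passes in (the backed-up PV's zone), and
// only fall back to the zone of the instance running the velero pod when
// volumeAZ is empty.
func resolveZone(volumeAZ string, instanceZone func() (string, error)) (string, error) {
	if volumeAZ != "" {
		// Honor the backed-up PV's zone so the new disk can be attached
		// to nodes in the same zone as the original volume.
		return volumeAZ, nil
	}
	// Fallback: behave like the current code and use the zone of the
	// instance hosting the velero pod (stand-in for getMetaData).
	return instanceZone()
}

func main() {
	// Stand-in for getMetaData(metadataZoneKey): the velero pod's zone.
	metadataZone := func() (string, error) { return "cn-shanghai-g", nil }

	z, _ := resolveZone("cn-shanghai-l", metadataZone)
	fmt.Println(z) // the PV's original zone wins

	z, _ = resolveZone("", metadataZone)
	fmt.Println(z) // falls back to the instance's zone
}
```

With this ordering, a restore of a PV backed up in cn-shanghai-l would create the disk in cn-shanghai-l even when the velero pod itself runs in cn-shanghai-g.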
Environment:
- Velero version: (use ark version):
- Kubernetes version: (use kubectl version):
- Kubernetes installer & version:
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):