Cannot print deployment
Hi, I am having some trouble printing the audit log; in practice, this error blocks my goroutine.
The code is as follows:
package main

import (
    "fmt"
    "net/http"
    _ "net/http/pprof" // registers the pprof handlers on the default mux
    "time"

    v1 "k8s.io/api/apps/v1"
    apiv1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    service()
    deployment()
    // Serve pprof on :6060 so the blocked goroutine can be inspected.
    if err := http.ListenAndServe(":6060", nil); err != nil {
        fmt.Println(err)
    }
}

func deployment() {
    var replicas int32 = 1
    deploy := &v1.Deployment{
        ObjectMeta: metav1.ObjectMeta{
            Name:              "111",
            Namespace:         "222",
            CreationTimestamp: metav1.Time{Time: time.Now()},
            //DeletionTimestamp: &metav1.Time{Time: time.Now()},
            Labels: map[string]string{
                "app":    "nginx",
                "deploy": "helm",
            },
        },
        Spec: v1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{
                MatchLabels: map[string]string{
                    "111": "222",
                },
            },
            Template: apiv1.PodTemplateSpec{},
        },
    }
    // fmt.Sprintln does not take a format string; Printf is what was intended here.
    fmt.Printf("%v\n", deploy)
}

func service() {
    svc := &apiv1.Service{
        ObjectMeta: metav1.ObjectMeta{
            Name:              "111",
            Namespace:         "222",
            CreationTimestamp: metav1.Time{Time: time.Now()},
            Labels: map[string]string{
                "app":    "nginx",
                "deploy": "helm",
            },
        },
    }
    fmt.Printf("%v\n", svc)
}
And the error:
Versions (from go.mod):
require (
    k8s.io/api v0.22.2
    k8s.io/apimachinery v0.22.2
)
/kind support
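If the goal is simply a readable dump of the object, one workaround is to marshal it with encoding/json instead of relying on the fmt verbs: the generated k8s.io/api types carry JSON tags, so they serialize cleanly. The sketch below is only an illustration under that assumption; the printAsJSON helper is hypothetical, not part of any Kubernetes library.

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// printAsJSON is a hypothetical helper: it marshals any API object with
// indentation so the output is readable in a log.
func printAsJSON(obj interface{}) {
    data, err := json.MarshalIndent(obj, "", "  ")
    if err != nil {
        fmt.Println("marshal error:", err)
        return
    }
    fmt.Println(string(data))
}

func main() {
    deploy := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "111", Namespace: "222"},
    }
    printAsJSON(deploy)
}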
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.