
EventsV1API_patchNamespacedEvent returns error - cannot unmarshal object into Go value of type jsonpatch.Patch

Open smuraligit opened this issue 3 years ago • 1 comment

Hello,

Hope you are well.

I am trying to patch an event with the event_series structure. This is the code I have:

// Update event
cJSON *jsonObj = events_v1_event_series_convertToJSON(evt_v1->series);
object_t *patchBody = object_parseFromJSON(jsonObj);
int forceUpdate = 0;
EventsV1API_patchNamespacedEvent(qEntry_proc->apiClient, qEntry_proc->evt_v1->metadata->name, qEntry_proc->_namespace, patchBody, NULL, NULL, NULL, NULL, forceUpdate);

But after apiClient_invoke(), I am seeing this error in cJSON_Parse():

'cannot unmarshal object into Go value of type jsonpatch.Patch'

It looks like the patch body I am passing to EventsV1API_patchNamespacedEvent() is not in the format the server expects for jsonpatch.Patch. Investigating further, I see that a jsonpatch.Patch should look like:

[ { "op": "replace", "path": "/baz", "value": "boo" }, ] for an example json { "baz": "qux", "foo": "bar" }

Any idea how I should transform the event series JSON buffer below into jsonpatch format?

"{\n\t"count":\t1,\n\t"lastObservedTime":\t"2022-10-21T22:51:30.000000Z"\n}"

If I'm wrong, please correct me on the proper usage of EventsV1API_patchNamespacedEvent().

Thank you very much!

smuraligit avatar Oct 21 '22 23:10 smuraligit

Can you do it manually first with kubectl patch? During this process you will need to create a patch YAML/JSON file. Then convert your patch to a JSON string using the cJSON library, as in the sketch below.
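For example, here is a minimal sketch of building the series patch as an RFC 6902 JSON Patch array with cJSON (the paths /series/count and /series/lastObservedTime are my assumption about where those fields live in the Event object, so verify them against the schema):

// Build [{"op":"replace","path":"/series/count","value":1}, ...]
cJSON *patch = cJSON_CreateArray();

cJSON *op1 = cJSON_CreateObject();
cJSON_AddStringToObject(op1, "op", "replace");
cJSON_AddStringToObject(op1, "path", "/series/count");
cJSON_AddNumberToObject(op1, "value", 1);
cJSON_AddItemToArray(patch, op1);

cJSON *op2 = cJSON_CreateObject();
cJSON_AddStringToObject(op2, "op", "replace");
cJSON_AddStringToObject(op2, "path", "/series/lastObservedTime");
cJSON_AddStringToObject(op2, "value", "2022-10-21T22:51:30.000000Z");
cJSON_AddItemToArray(patch, op2);

// Pass the array, not the bare series object, as the patch body.
object_t *patchBody = object_parseFromJSON(patch);

The key point is that a JSON Patch body must be the array of operations itself, not the object you want the resource to end up containing.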

ityuhui avatar Oct 24 '22 08:10 ityuhui

Hi @ityuhui, I ran into the same issue when I used CoreV1API_patchNodeStatus. I think the root cause is the same for both APIs. As the code below shows, more than one "HeaderType" and "ContentType" entry is added to the HTTP headers.

list_addElement(localVarHeaderType,"application/json"); //produces
list_addElement(localVarHeaderType,"application/yaml"); //produces
list_addElement(localVarHeaderType,"application/vnd.kubernetes.protobuf"); //produces
list_addElement(localVarContentType,"application/json-patch+json"); //consumes
list_addElement(localVarContentType,"application/merge-patch+json"); //consumes
list_addElement(localVarContentType,"application/strategic-merge-patch+json"); //consumes
list_addElement(localVarContentType,"application/apply-patch+yaml"); //consumes

But according to RFC 7230, section 3.2.2 (https://www.rfc-editor.org/rfc/rfc7230#section-3.2.2):

A sender MUST NOT generate multiple header fields with the same field name in a message unless either the entire field value for that header field is defined as a comma-separated list [i.e., #(values)] or the header field is a well-known exception (as noted below).
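If each of those list entries is emitted as its own header field, the request would carry something like this (illustrative):

Content-Type: application/json-patch+json
Content-Type: application/merge-patch+json
Content-Type: application/strategic-merge-patch+json
Content-Type: application/apply-patch+yaml

which violates the rule quoted above, since Content-Type is not defined as a comma-separated list.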

So I changed the above code a bit, as below:

list_addElement(localVarHeaderType,"application/json"); //produces
// list_addElement(localVarHeaderType,"application/yaml"); //produces
// list_addElement(localVarHeaderType,"application/vnd.kubernetes.protobuf"); //produces
// list_addElement(localVarContentType,"application/json-patch+json"); //consumes
// list_addElement(localVarContentType,"application/merge-patch+json"); //consumes
list_addElement(localVarContentType,"application/strategic-merge-patch+json"); //consumes
// list_addElement(localVarContentType,"application/apply-patch+yaml"); //consumes

It works for me. Would it be possible to add arguments to these patch-related APIs so that callers can configure the "HeaderType" and "ContentType"?
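For instance, something like this (purely hypothetical; the function and parameter names are illustrative, not the generated API):

// Hypothetical sketch: let the caller pick one Accept and one Content-Type
// instead of the generator hard-coding all of them.
events_v1_event_t *EventsV1API_patchNamespacedEventWithContentType(
    apiClient_t *apiClient,
    char *name,
    char *_namespace,
    object_t *body,
    char *accept,       /* e.g. "application/json" */
    char *contentType); /* e.g. "application/strategic-merge-patch+json" */

That would avoid patching the generated code by hand after every regeneration.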

@smuraligit, you may give it a try to see whether it resolves your problem.

Thanks!

sarh2o avatar Dec 29 '22 01:12 sarh2o

Thank you. I'll take a look soon.

ityuhui avatar Dec 29 '22 02:12 ityuhui

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 29 '23 02:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Apr 28 '23 03:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar May 28 '23 03:05 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the /close not-planned command in the k8s-triage-robot comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar May 28 '23 03:05 k8s-ci-robot