[Docs] `FlyteScopedUserException 'JavaPackage' object is not callable` if spark plugin is not enabled
Description
When executing a Spark task against a flyte-binary deployment that does not have the Spark plugin enabled, the following error occurs:
```
FlyteScopedUserException: 'JavaPackage' object is not callable
TypeError: Encountered error while executing workflow 'wf':
Error encountered while executing 'wf':
'JavaPackage' object is not callable
```
However, it is difficult to diagnose the root cause from this error message alone. I think it should be documented in the troubleshooting guide.
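For reference, the fix is a backend configuration change rather than a code change. A minimal sketch of what the flyte-binary task-plugin configuration looks like with Spark enabled, assuming the standard Flyte `tasks.task-plugins` config layout (the exact file location and surrounding keys depend on your deployment, e.g. Helm values under `configuration.inline`):

```yaml
# Sketch: enable the Spark backend plugin for a flyte-binary deployment.
# Key names follow the standard Flyte task-plugin configuration; verify
# against your installed chart/version before applying.
tasks:
  task-plugins:
    enabled-plugins:
      - container
      - sidecar
      - k8s-array
      - spark            # register the Spark plugin so spark tasks can run
    default-for-task-types:
      container: container
      sidecar: sidecar
      container_array: k8s-array
      spark: spark       # route tasks of type "spark" to the Spark plugin
```

Without the `spark` entry, the backend never registers the plugin, and the Python-side failure surfaces as the opaque `'JavaPackage' object is not callable` error above.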
Are you sure this issue hasn't been raised already?
- [X] Yes
Have you read the Code of Conduct?
- [X] Yes
🤖
Hey there, @jasonlai1218! 👋 I'm Dosu, a bot here to lend a hand with bugs, answer your questions, and guide you on becoming a contributor while we wait for a human maintainer to jump in. Just so you know, I'm here to make things a bit smoother for you. I see you've run into an issue with the Spark Task and the FlyteScopedUserException. I'm diving into it now and will get back to you with a more detailed answer shortly!
#self-assign
🤖
Thank you for bringing this to our attention. The error message you're seeing, `'JavaPackage' object is not callable`, is indeed not very informative about the underlying issue, which is that the Spark plugin is not enabled.
As you've suggested, it would be beneficial to have this information included in the troubleshooting documentation. This would help users understand the need to enable the Spark plugin when executing Spark tasks.
To enable the Spark plugin, you need to register the plugin with Flyte's plugin machinery. This is done in the `init()` function. Here is the relevant code snippet:
```go
func init() {
	if err := sparkOp.AddToScheme(scheme.Scheme); err != nil {
		panic(err)
	}
	pluginmachinery.PluginRegistry().RegisterK8sPlugin(
		k8s.PluginEntry{
			ID:                  sparkTaskType,
			RegisteredTaskTypes: []pluginsCore.TaskType{sparkTaskType},
			ResourceToWatch:     &sparkOp.SparkApplication{},
			Plugin:              sparkResourceHandler{},
			IsDefault:           false,
		})
}
```
In this code, `sparkOp.AddToScheme(scheme.Scheme)` adds the Spark operator to the scheme. If there is an error, it panics and stops execution.
Then, `pluginmachinery.PluginRegistry().RegisterK8sPlugin()` is called to register the Spark plugin. The `k8s.PluginEntry` struct is filled with the necessary information:
- `ID` is set to `sparkTaskType`, the identifier for the Spark task type.
- `RegisteredTaskTypes` is set to an array containing `sparkTaskType`, meaning this plugin can handle tasks of type `sparkTaskType`.
- `ResourceToWatch` is set to `&sparkOp.SparkApplication{}`, meaning this plugin watches for changes in SparkApplication resources.
- `Plugin` is set to `sparkResourceHandler{}`, the handler for Spark resources.
- `IsDefault` is set to `false`, meaning this plugin is not a default plugin.
You can find this code in the `flyteplugins/go/tasks/plugins/k8s/spark/spark.go` file.
I hope this helps clarify the issue. If you have any further questions, please don't hesitate to ask.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.