TensorComprehensions

Issues with EBNF (grammar)

Open · arogozhnikov opened this issue on Mar 30 '18 · 6 comments

Hi again. First, here is how the grammar currently looks in the documentation:

num ::= <number literal with C syntax>
id ::= [_a-zA-Z0-9]*[_a-zA-Z][_a-zA-Z0-9]*
exp ::= num
      | ( '-' | '!' | ... ) exp
      | exp ( [+-*/%] | '==' | '!=' | '<=' | ... ) exp
      | exp '?' exp ':' exp
      | id '.' num # range of num-th dimension of id
      | id '(' exp_list ')' # builtin call or tensor access

reduction ::= <associative reduction operator>
            | '+='  | '*='  | 'min='  | 'max='
            | '+=!' | '*=!' | 'min=!' | 'max=!'

range_constraint ::= id 'in' exp ':' exp

stmt ::= id '(' id_list ')' [ '=' | reduction ] exp
           [ 'where' range_constraint_list ]
       | id_list = id '('id_list ')' # TC function call

arg ::= type id
return ::= id # inferred return type and range

scalar_type ::= 'double' | 'float' | 'half'
              | 'int' | 'byte' | 'uint32' | ...

type ::= scalar_type [ '(' id_list ')' ]

func ::= # TC function definition
  'def' id '(' arg_list ')' '->' '(' return_list ')' '{'
    stmt_list
  '}'

id_list ::= <comma separated id list>
exp_list ::= <comma separated exp list>
arg_list ::= <comma separated arg list>
stmt_list ::= <whitespace separated stmt list>
return_list ::= <comma separated return list>
range_constraint_list ::= <non-empty comma separated
                           range_constraint list>
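
For reference, here is a minimal definition that this grammar does cover, as I read it: two arguments per arg ::= type id, an inferred return per return ::= id, and a '+=!' reduction statement. This is only a sketch along the lines of the usual matmul example, written as a Python string the way the bindings consume it; the names (matmul_tc, A, B, C) are illustrative.

# A sketch of a definition that fits the grammar above; names are illustrative.
matmul_tc = """
def matmul(float(M,K) A, float(K,N) B) -> (C) {
    C(m, n) +=! A(m, k) * B(k, n)
}
"""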

Some problems I've faced

  1. [+-*/%] - I wasn't able to use % (the error is "expected a valid token but found '%' here"), and I didn't find anything about this in the docs, nor was I able to replace it with any CUDA ops. Suggestions? (See the sketch after this list.)
  2. | 'int' | 'byte' | 'uint32' | ... - this should presumably be int32 here.
  3. reduction ::= <associative reduction operator> ... I understand this as another option for reduction, but <associative reduction operator> is never defined. Shouldn't this be just a comment?
  4. id '.' num # range of num-th dimension of id - I wasn't able to use this either. Maybe an example would help (I sketch my reading below).
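
For concreteness, here is the kind of statement items 1 and 4 are about. These are hypothetical reconstructions rather than verbatim code, and the definitions and names (checkerboard, first_column) are made up for illustration.

# Item 1: '%' is listed in the exp rule but the parser rejects it.
modulo_tc = """
def checkerboard(float(M,N) A) -> (O) {
    O(m, n) = A(m, n) * ((m + n) % 2)
}
"""

# Item 4: my reading of id '.' num ("range of num-th dimension of id"),
# used here in a range constraint; 0-based dimension numbering is a guess.
first_column_tc = """
def first_column(float(M,N) A) -> (O) {
    O(m) = A(m, 0) where m in 0:A.0
}
"""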

arogozhnikov · Mar 30 '18

And, according to the grammar, the example below is impossible :)

def cast(float(M,N) A) -> (int32(M,N) O1) {{

arogozhnikov · Mar 30 '18

And, according to the grammar, the example below is impossible

Indeed, that extension is part of the Python bindings and not part of the TC language.

https://facebookresearch.github.io/TensorComprehensions/framework/pytorch_integration/writing_layers.html#writing-layers-with-scalars
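
Roughly, the bindings substitute scalar values into the TC string on the Python side before it reaches the parser, which is why literal braces have to be doubled. A minimal sketch, assuming plain str.format is used for the substitution; the names (template, scale, alpha) are illustrative.

# {{ and }} are str.format escapes for literal braces; {alpha} is filled in
# from Python, so the TC parser only ever sees ordinary single braces.
template = """
def scale(float(M,N) A) -> (O) {{
    O(m, n) = A(m, n) * {alpha}
}}
"""
tc_source = template.format(alpha=2)
# tc_source now reads:
#   def scale(float(M,N) A) -> (O) {
#       O(m, n) = A(m, n) * 2
#   }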

ftynse · Mar 30 '18

@ftynse I'm not sure we understood each other.

  • I believe you're talking about the double brackets {{
  • I'm pointing out that the specification of the output shape and type (int32(M,N) O1) is not included in the grammar

arogozhnikov · Mar 30 '18

Then it's better to specify what exactly you are pointing out.

Indeed, output sizes are always inferred rather than specified.
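
In other words, a plain TC definition only names its outputs; their types and sizes are inferred from the statements. A minimal sketch, assuming fmax is available as a builtin and with illustrative names:

relu_tc = """
def relu(float(M,N) A) -> (O) {
    O(m, n) = fmax(A(m, n), 0.0)
}
"""
# O's type (float) and sizes (M, N) are inferred; nothing is declared on O.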

ftynse · Mar 30 '18

[+-*/%] - I wasn't able to use % (the error is "expected a valid token but found '%' here"), and I didn't find anything about this in the docs, nor was I able to replace it with any CUDA ops. Suggestions?

Probably not implemented yet, thanks.

| 'int' | 'byte' | 'uint32' | ... - this should presumably be int32 here.

Yes, and probably int8 instead of byte for consistency.

reduction ::= <associative reduction operator> ... I understand this as another option for reduction, but <associative reduction operator> is never defined. Shouldn't this be just a comment?

Yes

id '.' num # range of num-th dimension of id.

Not implemented either.

ftynse · Mar 30 '18

Hi @arogozhnikov, thanks for bringing this to our attention. Indeed, we should modify the documentation and describe what actually works. Assigning this to myself to reflect these changes in the docs :) I'll also check with @zdevito and try to set up the test cases.

prigoyal · Mar 30 '18