#3127 Add log sum exp func
Summary
As discussed in issue #3127, I've implemented a log sum exp function that computes the element-wise logarithm of the sum of exponentials of its two arguments and returns the result as a container.
I've added three implementation files (one each in prim, rev, and fwd) plus a test file, handling the edge cases that can arise.
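For context, a minimal sketch of the scalar operation the container versions apply element-wise (this is the standard two-argument log-sum-exp trick; the standalone function shown here is illustrative, not a verbatim copy of the PR):

```cpp
#include <algorithm>
#include <cmath>

// Numerically stable log(exp(a) + exp(b)): factor out the max so the
// remaining exponent is non-positive and cannot overflow.
inline double log_add_exp(double a, double b) {
  if (std::isinf(a) && a < 0) {
    return b;  // exp(-inf) contributes nothing
  }
  if (std::isinf(b) && b < 0) {
    return a;
  }
  const double m = std::max(a, b);
  return m + std::log1p(std::exp(-std::fabs(a - b)));
}
```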
Tests
The tests live at the following path; I modelled them on the existing log_sum_exp tests as a base example:
test/unit/math/mix/fun/log_add_exp_test.cpp
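For reference, mix tests for vectorised functions usually drive `stan::test::expect_ad`, which checks values and gradients across the autodiff types; a rough sketch of one such case (the specific values and the container overloads under test are my assumptions, not the actual file contents):

```cpp
#include <test/unit/math/test_ad.hpp>

TEST(mathMixScalFun, log_add_exp) {
  auto f = [](const auto& x, const auto& y) {
    return stan::math::log_add_exp(x, y);
  };

  // Scalar-scalar case.
  stan::test::expect_ad(f, 2.0, 1.5);

  // Vector-vector case: result should be element-wise and the same size.
  Eigen::VectorXd a(2), b(2);
  a << 0.5, -1.0;
  b << 1.2, 0.3;
  stan::test::expect_ad(f, a, b);
}
```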
Side Effects
None
Release notes
log_add_exp will be available once this PR is merged.
Checklist
- [ ] Copyright holder: (fill in copyright holder information)

  The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:
  - Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
  - Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

- [ ] the basic tests are passing
  - unit tests pass (to run, use: `./runTests.py test/unit`)
  - header checks pass (`make test-headers`)
  - dependency checks pass (`make test-math-dependencies`)
  - docs build (`make doxygen`)
  - code passes the built-in C++ standards checks (`make cpplint`)

- [ ] the code is written in idiomatic C++ and changes are documented in the doxygen

- [ ] the new changes are tested
| Name | Old Result | New Result | Ratio | Performance change( 1 - new / old ) |
|---|---|---|---|---|
| arma/arma.stan | 0.33 | 0.32 | 1.03 | 2.52% faster |
| low_dim_corr_gauss/low_dim_corr_gauss.stan | 0.01 | 0.01 | 1.0 | -0.06% slower |
| gp_regr/gen_gp_data.stan | 0.03 | 0.03 | 1.03 | 2.81% faster |
| gp_regr/gp_regr.stan | 0.1 | 0.09 | 1.03 | 2.73% faster |
| sir/sir.stan | 70.52 | 70.01 | 1.01 | 0.72% faster |
| irt_2pl/irt_2pl.stan | 4.29 | 4.14 | 1.04 | 3.42% faster |
| eight_schools/eight_schools.stan | 0.06 | 0.06 | 0.99 | -0.59% slower |
| pkpd/sim_one_comp_mm_elim_abs.stan | 0.25 | 0.25 | 1.01 | 1.1% faster |
| pkpd/one_comp_mm_elim_abs.stan | 19.41 | 19.63 | 0.99 | -1.13% slower |
| garch/garch.stan | 0.42 | 0.41 | 1.03 | 2.55% faster |
| low_dim_gauss_mix/low_dim_gauss_mix.stan | 2.68 | 2.62 | 1.02 | 2.42% faster |
| arK/arK.stan | 1.81 | 1.75 | 1.03 | 3.0% faster |
| gp_pois_regr/gp_pois_regr.stan | 2.84 | 2.74 | 1.04 | 3.44% faster |
| low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan | 8.87 | 8.47 | 1.05 | 4.47% faster |
| performance.compilation | 181.63 | 185.61 | 0.98 | -2.19% slower |
| Mean result: 1.017452740630533 |
Jenkins Console Log Blue Ocean Commit hash: 994a6ce2385990c2ed28f9822e090202d7801a42
Machine information
No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 20.04.3 LTS Release: 20.04 Codename: focalCPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 80 On-line CPU(s) list: 0-79 Thread(s) per core: 2 Core(s) per socket: 20 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz Stepping: 4 CPU MHz: 2400.000 CPU max MHz: 3700.0000 CPU min MHz: 1000.0000 BogoMIPS: 4800.00 Virtualization: VT-x L1d cache: 1.3 MiB L1i cache: 1.3 MiB L2 cache: 40 MiB L3 cache: 55 MiB NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78 NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79 Vulnerability Gather data sampling: Mitigation; Microcode Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Mitigation; IBRS Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke md_clear flush_l1d arch_capabilities
G++: g++ (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0 Copyright (C) 2019 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Clang: clang version 10.0.0-4ubuntu1 Target: x86_64-pc-linux-gnu Thread model: posix InstalledDir: /usr/bin
What about the row-wise or column-wise summation of matrices? Would that get a different function name?
Just for my understanding: The intention of the function is to add two containers a and b using the log-sum-exp thing and returning again a container, right?
> The intention of the function is to add two containers a and b using the log-sum-exp thing and returning again a container, right?
Correct, that is what @spinkney requested at least
Ok, my apologies, I think I misinterpreted the original issue. I'll work on it tonight, incorporate the log_sum_exp into log_add_exp, and ensure the tests reflect the change of only returning containers. Should I assume the user is only allowed to pass in containers a and b, or should I account for different data types being passed in? Thank you
I think the only requirement should be that the containers passed must be the same size, but if one is a row vector and the other is a column vector I would expect that to be fine, I think?
Should I account for multi-dimensional matrix containers, or assume that only vectors or one-dimensional matrices are accepted into the method, regardless of whether they are column- or row-wise?
I think the most useful thing would probably be to work over arbitrary containers of the same size. I think the apply_scalar_binary helpers may automate that
> I think the most useful thing would probably be to work over arbitrary containers of the same size. I think the apply_scalar_binary helpers may automate that
I've been trying to implement this, but I'm running into an issue: if I'm given two matrix types whose dimensions don't align, I can't pass them into apply_scalar_binary. I tried a few workarounds, such as transposing one matrix and casting it so its type doesn't change, but that caused the vector cases to start failing. Perhaps we should reject matrix containers unless they are the same size and the same container type? Or is there a solution I haven't tried yet?
I may be misunderstanding, but if both arguments are matrices I think it is correct to require them to be the same size. We don't generally do any "broadcasting" besides maybe allowing a scalar and a container.
Ok, thank you. In that case, I'll throw an exception if two matrices are passed and their dimensions don't match.
I believe check_matching_dims does what we’d want it to do for that purpose
I've finished implementing the above changes. To clarify, though: I understand this method will also be used for row-wise or column-wise summation of matrices. Will that be done outside the method, based on how the user calls it (passing in pairs of rows or columns)? At the moment, the code adds two containers a and b using the log-sum-exp operation and returns a container of the same type as the arguments. Any matrices passed in are combined element-wise, and the result is returned in a container of the same type. A sketch of the overall shape is below.
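To make that concrete, here is a rough sketch of the prim container overload as I understand it from this thread (the `apply_scalar_binary` argument order, the require trait, and the header paths are my assumptions based on similar vectorised functions, not a verbatim copy of the PR):

```cpp
#include <stan/math/prim/err/check_matching_dims.hpp>
#include <stan/math/prim/functor/apply_scalar_binary.hpp>
#include <stan/math/prim/meta.hpp>

namespace stan {
namespace math {

// Element-wise log(exp(a) + exp(b)) over two containers; the inputs must
// have matching dimensions and the result has the same shape as the inputs.
template <typename T1, typename T2,
          require_all_container_t<T1, T2>* = nullptr>
inline auto log_add_exp(const T1& a, const T2& b) {
  check_matching_dims("log_add_exp", "a", a, "b", b);
  return apply_scalar_binary(a, b, [](const auto& x, const auto& y) {
    // Defer to the scalar overload for each pair of elements.
    return log_add_exp(x, y);
  });
}

}  // namespace math
}  // namespace stan
```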
| Name | Old Result | New Result | Ratio | Performance change( 1 - new / old ) |
|---|---|---|---|---|
| arma/arma.stan | 0.33 | 0.3 | 1.09 | 8.23% faster |
| low_dim_corr_gauss/low_dim_corr_gauss.stan | 0.01 | 0.01 | 1.06 | 5.22% faster |
| gp_regr/gen_gp_data.stan | 0.03 | 0.03 | 1.04 | 3.43% faster |
| gp_regr/gp_regr.stan | 0.09 | 0.09 | 1.07 | 6.67% faster |
| sir/sir.stan | 70.32 | 70.31 | 1.0 | 0.01% faster |
| irt_2pl/irt_2pl.stan | 4.21 | 4.31 | 0.98 | -2.28% slower |
| eight_schools/eight_schools.stan | 0.06 | 0.06 | 1.04 | 4.2% faster |
| pkpd/sim_one_comp_mm_elim_abs.stan | 0.25 | 0.25 | 0.99 | -0.92% slower |
| pkpd/one_comp_mm_elim_abs.stan | 19.53 | 19.66 | 0.99 | -0.66% slower |
| garch/garch.stan | 0.43 | 0.47 | 0.92 | -8.71% slower |
| low_dim_gauss_mix/low_dim_gauss_mix.stan | 2.76 | 2.59 | 1.06 | 6.02% faster |
| arK/arK.stan | 1.87 | 1.71 | 1.1 | 8.81% faster |
| gp_pois_regr/gp_pois_regr.stan | 2.82 | 2.67 | 1.05 | 5.19% faster |
| low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan | 8.87 | 8.38 | 1.06 | 5.45% faster |
| performance.compilation | 179.97 | 179.98 | 1.0 | -0.01% slower |
| Mean result: 1.0300433622507947 |
Jenkins Console Log Blue Ocean Commit hash: 994a6ce2385990c2ed28f9822e090202d7801a42
Sorry for my delay in looking at this. Bit of a busy week but I will try to look on Friday. Glancing at everything I didn't see anything that was a huge blocker
Apologies for coming to this late, and this might be me misunderstanding the PR, but don't we already have a vectorised binary version of log_sum_exp?
@andrjohns the existing function returns a scalar, the feature request was for something that worked element wise and returned a container
> @andrjohns the existing function returns a scalar, the feature request was for something that worked element wise and returned a container
Isn't that what the vectorised binary version is doing though? Applying the function elementwise to two containers and returning a container?
I checked the current exposed signatures for log_sum_exp and the elementwise container returns are already present:
andrew@Andrews-MacBook-Air bin % ./stanc --dump-stan-math-signatures | grep log_sum_exp
log_sum_exp(int, int) => real
log_sum_exp(int, real) => real
log_sum_exp(real, int) => real
log_sum_exp(real, real) => real
log_sum_exp(vector) => real
log_sum_exp(row_vector) => real
log_sum_exp(matrix) => real
log_sum_exp(array[] real) => real
log_sum_exp(int, vector) => vector
log_sum_exp(real, vector) => vector
log_sum_exp(vector, int) => vector
log_sum_exp(vector, real) => vector
log_sum_exp(vector, vector) => vector
log_sum_exp(int, row_vector) => row_vector
log_sum_exp(real, row_vector) => row_vector
log_sum_exp(row_vector, int) => row_vector
log_sum_exp(row_vector, real) => row_vector
log_sum_exp(row_vector, row_vector) => row_vector
log_sum_exp(int, matrix) => matrix
log_sum_exp(real, matrix) => matrix
log_sum_exp(matrix, int) => matrix
log_sum_exp(matrix, real) => matrix
log_sum_exp(matrix, matrix) => matrix
...
I was also under the impression that this always returned a real. I can test those other cases.
The log_add_exp is more of an apply type operation. The signatures you posted above get most of the way there but I'd also want to include array of vectors/matrices.
For example
log_sum_exp(array[] real, array[] real) => array[] real
log_sum_exp(array[] matrix, array[] matrix) => array[] matrix
log_sum_exp(array[] row_vector, array[] row_vector) => array[] row_vector
log_sum_exp(array[] vector, array[] vector) => array[] vector
Though this could be accomplished with an apply type function in the language.
It would be nice to have tuples with one call as well
log_sum_exp(tuple(real, real, ...)) => real
log_sum_exp(tuple(matrix, matrix, ...)) => matrix
log_sum_exp(tuple(row_vector, row_vector, ...)) => row_vector
log_sum_exp(tuple(vector, vector, ...)) => vector
> The log_add_exp is more of an apply type operation. The signatures you posted above get most of the way there but I'd also want to include array of vectors/matrices.
It covers those as well (the magic of apply_scalar_binary, to toot my own horn); I just truncated the output:
./stanc --dump-stan-math-signatures | grep log_sum_exp
log_sum_exp(int, int) => real
log_sum_exp(int, real) => real
log_sum_exp(real, int) => real
log_sum_exp(real, real) => real
log_sum_exp(vector) => real
log_sum_exp(row_vector) => real
log_sum_exp(matrix) => real
log_sum_exp(array[] real) => real
log_sum_exp(int, vector) => vector
log_sum_exp(real, vector) => vector
log_sum_exp(vector, int) => vector
log_sum_exp(vector, real) => vector
log_sum_exp(vector, vector) => vector
log_sum_exp(int, row_vector) => row_vector
log_sum_exp(real, row_vector) => row_vector
log_sum_exp(row_vector, int) => row_vector
log_sum_exp(row_vector, real) => row_vector
log_sum_exp(row_vector, row_vector) => row_vector
log_sum_exp(int, matrix) => matrix
log_sum_exp(real, matrix) => matrix
log_sum_exp(matrix, int) => matrix
log_sum_exp(matrix, real) => matrix
log_sum_exp(matrix, matrix) => matrix
log_sum_exp(int, array[] int) => array[] real
log_sum_exp(int, array[] real) => array[] real
log_sum_exp(real, array[] int) => array[] real
log_sum_exp(real, array[] real) => array[] real
log_sum_exp(array[] int, int) => array[] real
log_sum_exp(array[] int, real) => array[] real
log_sum_exp(array[] int, array[] int) => array[] real
log_sum_exp(array[] real, int) => array[] real
log_sum_exp(array[] real, real) => array[] real
log_sum_exp(array[] real, array[] real) => array[] real
log_sum_exp(int, array[] vector) => array[] vector
log_sum_exp(real, array[] vector) => array[] vector
log_sum_exp(array[] vector, int) => array[] vector
log_sum_exp(array[] vector, real) => array[] vector
log_sum_exp(array[] vector, array[] vector) => array[] vector
log_sum_exp(int, array[] row_vector) => array[] row_vector
log_sum_exp(real, array[] row_vector) => array[] row_vector
log_sum_exp(array[] row_vector, int) => array[] row_vector
log_sum_exp(array[] row_vector, real) => array[] row_vector
log_sum_exp(array[] row_vector, array[] row_vector) => array[] row_vector
log_sum_exp(int, array[] matrix) => array[] matrix
log_sum_exp(real, array[] matrix) => array[] matrix
log_sum_exp(array[] matrix, int) => array[] matrix
log_sum_exp(array[] matrix, real) => array[] matrix
log_sum_exp(array[] matrix, array[] matrix) => array[] matrix
log_sum_exp(int, array[,] int) => array[,] real
log_sum_exp(int, array[,] real) => array[,] real
log_sum_exp(real, array[,] int) => array[,] real
log_sum_exp(real, array[,] real) => array[,] real
log_sum_exp(array[,] int, int) => array[,] real
log_sum_exp(array[,] int, real) => array[,] real
log_sum_exp(array[,] int, array[,] int) => array[,] real
log_sum_exp(array[,] real, int) => array[,] real
log_sum_exp(array[,] real, real) => array[,] real
log_sum_exp(array[,] real, array[,] real) => array[,] real
log_sum_exp(int, array[,] vector) => array[,] vector
log_sum_exp(real, array[,] vector) => array[,] vector
log_sum_exp(array[,] vector, int) => array[,] vector
log_sum_exp(array[,] vector, real) => array[,] vector
log_sum_exp(array[,] vector, array[,] vector) => array[,] vector
log_sum_exp(int, array[,] row_vector) => array[,] row_vector
log_sum_exp(real, array[,] row_vector) => array[,] row_vector
log_sum_exp(array[,] row_vector, int) => array[,] row_vector
log_sum_exp(array[,] row_vector, real) => array[,] row_vector
log_sum_exp(array[,] row_vector, array[,] row_vector) => array[,] row_vector
log_sum_exp(int, array[,] matrix) => array[,] matrix
log_sum_exp(real, array[,] matrix) => array[,] matrix
log_sum_exp(array[,] matrix, int) => array[,] matrix
log_sum_exp(array[,] matrix, real) => array[,] matrix
log_sum_exp(array[,] matrix, array[,] matrix) => array[,] matrix
log_sum_exp(int, array[,,] int) => array[,,] real
log_sum_exp(int, array[,,] real) => array[,,] real
log_sum_exp(real, array[,,] int) => array[,,] real
log_sum_exp(real, array[,,] real) => array[,,] real
log_sum_exp(array[,,] int, int) => array[,,] real
log_sum_exp(array[,,] int, real) => array[,,] real
log_sum_exp(array[,,] int, array[,,] int) => array[,,] real
log_sum_exp(array[,,] real, int) => array[,,] real
log_sum_exp(array[,,] real, real) => array[,,] real
log_sum_exp(array[,,] real, array[,,] real) => array[,,] real
log_sum_exp(int, array[,,] vector) => array[,,] vector
log_sum_exp(real, array[,,] vector) => array[,,] vector
log_sum_exp(array[,,] vector, int) => array[,,] vector
log_sum_exp(array[,,] vector, real) => array[,,] vector
log_sum_exp(array[,,] vector, array[,,] vector) => array[,,] vector
log_sum_exp(int, array[,,] row_vector) => array[,,] row_vector
log_sum_exp(real, array[,,] row_vector) => array[,,] row_vector
log_sum_exp(array[,,] row_vector, int) => array[,,] row_vector
log_sum_exp(array[,,] row_vector, real) => array[,,] row_vector
log_sum_exp(array[,,] row_vector, array[,,] row_vector) => array[,,] row_vector
log_sum_exp(int, array[,,] matrix) => array[,,] matrix
log_sum_exp(real, array[,,] matrix) => array[,,] matrix
log_sum_exp(array[,,] matrix, int) => array[,,] matrix
log_sum_exp(array[,,] matrix, real) => array[,,] matrix
log_sum_exp(array[,,] matrix, array[,,] matrix) => array[,,] matrix
log_sum_exp(int, array[,,,] int) => array[,,,] real
log_sum_exp(int, array[,,,] real) => array[,,,] real
log_sum_exp(real, array[,,,] int) => array[,,,] real
log_sum_exp(real, array[,,,] real) => array[,,,] real
log_sum_exp(array[,,,] int, int) => array[,,,] real
log_sum_exp(array[,,,] int, real) => array[,,,] real
log_sum_exp(array[,,,] int, array[,,,] int) => array[,,,] real
log_sum_exp(array[,,,] real, int) => array[,,,] real
log_sum_exp(array[,,,] real, real) => array[,,,] real
log_sum_exp(array[,,,] real, array[,,,] real) => array[,,,] real
log_sum_exp(int, array[,,,] vector) => array[,,,] vector
log_sum_exp(real, array[,,,] vector) => array[,,,] vector
log_sum_exp(array[,,,] vector, int) => array[,,,] vector
log_sum_exp(array[,,,] vector, real) => array[,,,] vector
log_sum_exp(array[,,,] vector, array[,,,] vector) => array[,,,] vector
log_sum_exp(int, array[,,,] row_vector) => array[,,,] row_vector
log_sum_exp(real, array[,,,] row_vector) => array[,,,] row_vector
log_sum_exp(array[,,,] row_vector, int) => array[,,,] row_vector
log_sum_exp(array[,,,] row_vector, real) => array[,,,] row_vector
log_sum_exp(array[,,,] row_vector, array[,,,] row_vector) => array[,,,] row_vector
log_sum_exp(int, array[,,,] matrix) => array[,,,] matrix
log_sum_exp(real, array[,,,] matrix) => array[,,,] matrix
log_sum_exp(array[,,,] matrix, int) => array[,,,] matrix
log_sum_exp(array[,,,] matrix, real) => array[,,,] matrix
log_sum_exp(array[,,,] matrix, array[,,,] matrix) => array[,,,] matrix
log_sum_exp(int, array[,,,,] int) => array[,,,,] real
log_sum_exp(int, array[,,,,] real) => array[,,,,] real
log_sum_exp(real, array[,,,,] int) => array[,,,,] real
log_sum_exp(real, array[,,,,] real) => array[,,,,] real
log_sum_exp(array[,,,,] int, int) => array[,,,,] real
log_sum_exp(array[,,,,] int, real) => array[,,,,] real
log_sum_exp(array[,,,,] int, array[,,,,] int) => array[,,,,] real
log_sum_exp(array[,,,,] real, int) => array[,,,,] real
log_sum_exp(array[,,,,] real, real) => array[,,,,] real
log_sum_exp(array[,,,,] real, array[,,,,] real) => array[,,,,] real
log_sum_exp(int, array[,,,,] vector) => array[,,,,] vector
log_sum_exp(real, array[,,,,] vector) => array[,,,,] vector
log_sum_exp(array[,,,,] vector, int) => array[,,,,] vector
log_sum_exp(array[,,,,] vector, real) => array[,,,,] vector
log_sum_exp(array[,,,,] vector, array[,,,,] vector) => array[,,,,] vector
log_sum_exp(int, array[,,,,] row_vector) => array[,,,,] row_vector
log_sum_exp(real, array[,,,,] row_vector) => array[,,,,] row_vector
log_sum_exp(array[,,,,] row_vector, int) => array[,,,,] row_vector
log_sum_exp(array[,,,,] row_vector, real) => array[,,,,] row_vector
log_sum_exp(array[,,,,] row_vector, array[,,,,] row_vector) => array[,,,,] row_vector
log_sum_exp(int, array[,,,,] matrix) => array[,,,,] matrix
log_sum_exp(real, array[,,,,] matrix) => array[,,,,] matrix
log_sum_exp(array[,,,,] matrix, int) => array[,,,,] matrix
log_sum_exp(array[,,,,] matrix, real) => array[,,,,] matrix
log_sum_exp(array[,,,,] matrix, array[,,,,] matrix) => array[,,,,] matrix
log_sum_exp(int, array[,,,,,] int) => array[,,,,,] real
log_sum_exp(int, array[,,,,,] real) => array[,,,,,] real
log_sum_exp(real, array[,,,,,] int) => array[,,,,,] real
log_sum_exp(real, array[,,,,,] real) => array[,,,,,] real
log_sum_exp(array[,,,,,] int, int) => array[,,,,,] real
log_sum_exp(array[,,,,,] int, real) => array[,,,,,] real
log_sum_exp(array[,,,,,] int, array[,,,,,] int) => array[,,,,,] real
log_sum_exp(array[,,,,,] real, int) => array[,,,,,] real
log_sum_exp(array[,,,,,] real, real) => array[,,,,,] real
log_sum_exp(array[,,,,,] real, array[,,,,,] real) => array[,,,,,] real
log_sum_exp(int, array[,,,,,] vector) => array[,,,,,] vector
log_sum_exp(real, array[,,,,,] vector) => array[,,,,,] vector
log_sum_exp(array[,,,,,] vector, int) => array[,,,,,] vector
log_sum_exp(array[,,,,,] vector, real) => array[,,,,,] vector
log_sum_exp(array[,,,,,] vector, array[,,,,,] vector) => array[,,,,,] vector
log_sum_exp(int, array[,,,,,] row_vector) => array[,,,,,] row_vector
log_sum_exp(real, array[,,,,,] row_vector) => array[,,,,,] row_vector
log_sum_exp(array[,,,,,] row_vector, int) => array[,,,,,] row_vector
log_sum_exp(array[,,,,,] row_vector, real) => array[,,,,,] row_vector
log_sum_exp(array[,,,,,] row_vector, array[,,,,,] row_vector) => array[,,,,,] row_vector
log_sum_exp(int, array[,,,,,] matrix) => array[,,,,,] matrix
log_sum_exp(real, array[,,,,,] matrix) => array[,,,,,] matrix
log_sum_exp(array[,,,,,] matrix, int) => array[,,,,,] matrix
log_sum_exp(array[,,,,,] matrix, real) => array[,,,,,] matrix
log_sum_exp(array[,,,,,] matrix, array[,,,,,] matrix) => array[,,,,,] matrix
log_sum_exp(int, array[,,,,,,] int) => array[,,,,,,] real
log_sum_exp(int, array[,,,,,,] real) => array[,,,,,,] real
log_sum_exp(real, array[,,,,,,] int) => array[,,,,,,] real
log_sum_exp(real, array[,,,,,,] real) => array[,,,,,,] real
log_sum_exp(array[,,,,,,] int, int) => array[,,,,,,] real
log_sum_exp(array[,,,,,,] int, real) => array[,,,,,,] real
log_sum_exp(array[,,,,,,] int, array[,,,,,,] int) => array[,,,,,,] real
log_sum_exp(array[,,,,,,] real, int) => array[,,,,,,] real
log_sum_exp(array[,,,,,,] real, real) => array[,,,,,,] real
log_sum_exp(array[,,,,,,] real, array[,,,,,,] real) => array[,,,,,,] real
log_sum_exp(int, array[,,,,,,] vector) => array[,,,,,,] vector
log_sum_exp(real, array[,,,,,,] vector) => array[,,,,,,] vector
log_sum_exp(array[,,,,,,] vector, int) => array[,,,,,,] vector
log_sum_exp(array[,,,,,,] vector, real) => array[,,,,,,] vector
log_sum_exp(array[,,,,,,] vector, array[,,,,,,] vector) => array[,,,,,,] vector
log_sum_exp(int, array[,,,,,,] row_vector) => array[,,,,,,] row_vector
log_sum_exp(real, array[,,,,,,] row_vector) => array[,,,,,,] row_vector
log_sum_exp(array[,,,,,,] row_vector, int) => array[,,,,,,] row_vector
log_sum_exp(array[,,,,,,] row_vector, real) => array[,,,,,,] row_vector
log_sum_exp(array[,,,,,,] row_vector, array[,,,,,,] row_vector) => array[,,,,,,] row_vector
log_sum_exp(int, array[,,,,,,] matrix) => array[,,,,,,] matrix
log_sum_exp(real, array[,,,,,,] matrix) => array[,,,,,,] matrix
log_sum_exp(array[,,,,,,] matrix, int) => array[,,,,,,] matrix
log_sum_exp(array[,,,,,,] matrix, real) => array[,,,,,,] matrix
log_sum_exp(array[,,,,,,] matrix, array[,,,,,,] matrix) => array[,,,,,,] matrix
log_sum_exp(int, array[,,,,,,,] int) => array[,,,,,,,] real
log_sum_exp(int, array[,,,,,,,] real) => array[,,,,,,,] real
log_sum_exp(real, array[,,,,,,,] int) => array[,,,,,,,] real
log_sum_exp(real, array[,,,,,,,] real) => array[,,,,,,,] real
log_sum_exp(array[,,,,,,,] int, int) => array[,,,,,,,] real
log_sum_exp(array[,,,,,,,] int, real) => array[,,,,,,,] real
log_sum_exp(array[,,,,,,,] int, array[,,,,,,,] int) => array[,,,,,,,] real
log_sum_exp(array[,,,,,,,] real, int) => array[,,,,,,,] real
log_sum_exp(array[,,,,,,,] real, real) => array[,,,,,,,] real
log_sum_exp(array[,,,,,,,] real, array[,,,,,,,] real) => array[,,,,,,,] real
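For what it's worth, a small usage sketch of the nested-array case implied by these signatures, assuming the usual mapping of Stan types to C++ (array[] vector corresponds to std::vector<Eigen::VectorXd>); purely illustrative:

```cpp
#include <stan/math.hpp>
#include <iostream>
#include <vector>

int main() {
  // Two "array[] vector" arguments of matching shape.
  std::vector<Eigen::VectorXd> a(2, Eigen::VectorXd::Constant(3, 0.5));
  std::vector<Eigen::VectorXd> b(2, Eigen::VectorXd::Constant(3, 1.5));

  // apply_scalar_binary recurses through the std::vector layer and then
  // applies log_sum_exp element-wise, so the result is again an array of
  // vectors of the same shape.
  auto c = stan::math::log_sum_exp(a, b);
  std::cout << c[0].transpose() << std::endl;
  return 0;
}
```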
Hi @andrjohns and @spinkney
I tried using log_sum_exp(tuple(vector, vector, vector)) => vector in my Stan code and got an error stating that tuples are not supported in log_sum_exp.
Here is the error message:
Compiling Stan program...
Semantic error in '/var/folders/s6/8h8lsjp50dn_cn79d_b388jh0000gn/T/RtmpjMYmxC/model-1594842f2d57.stan', line 30, column 16 to column 32:
-------------------------------------------------
28: abc.3 = mixture[,3];
29:
30: return [sum(log_sum_exp(abc))]';
^
31:
32: // for (i in 2:N_gen) mixture_lsp = log_sum_exp(mixture_lsp, mixture[,i]);
-------------------------------------------------
Ill-typed arguments supplied to function 'log_sum_exp':
(tuple(vector, vector, vector))
Available signatures:
(vector) => real
The first argument must be vector but got tuple(vector, vector, vector)
(row_vector) => real
The first argument must be row_vector but got tuple(vector, vector, vector)
(matrix) => real
The first argument must be matrix but got tuple(vector, vector, vector)
(array[] real) => real
The first argument must be
array[] real
but got
tuple(vector, vector, vector)
(int, int) => real
Expected 2 arguments but found 1 arguments.
(Additional signatures omitted)
make: *** [/var/folders/s6/8h8lsjp50dn_cn79d_b388jh0000gn/T/RtmpjMYmxC/model-1594842f2d57.hpp] Error 1
Error: An error occured during compilation! See the message above for more information.
Am I using log_sum_exp incorrectly?