lvm-toolkit
UTML (UT Austin Machine Learning Group) Latent Variable Modeling Toolkit
Joseph Reisinger [email protected]
ver. 0.00000...01 (alarmingly alpha)
OVERVIEW
Implements a bunch of multinomial-Dirichlet latent variable models in C++, including:
- Dirichlet Process Mixture Model
- Latent Dirichlet Allocation
- Nested Chinese Restaurant Process (hierarchical LDA)
  - fixed-depth multinomial sampler
  - arbitrary depth w/ GEM sampler
- Labeled LDA / Fixed-Structure hLDA
- Tiered Clustering
- Cross-Cutting Categorization
- Soft Cross-Cutting Categorization
- (EXPERIMENTAL) Clustered LDA (latent word model)
I'm releasing this not because we need another topic modeling package, but because it includes cross-cutting categorization and tiered clustering, neither of which has another publicly available implementation that I'm aware of. It's also here in case people want to try to reproduce my research.
If you're looking to do straight-up topic modeling, there are several far more mature, faster, and better-documented packages:
- MALLET (java): http://mallet.cs.umass.edu/
- R LDA (R): http://cran.r-project.org/web/packages/lda/
- Stanford Topic Modeling Toolkit (scala): http://nlp.stanford.edu/software/tmt/tmt-0.3/
- LDA-C (C): http://www.cs.princeton.edu/~blei/lda-c/index.html
Also, if you just want vanilla nCRP/hLDA, David Blei's code is probably more reliable:
- hLDA: http://www.cs.princeton.edu/~blei/downloads/hlda-c.tgz
A lot of the common math/stats routines were adapted from the samplib and stats source files included in Hal Daumé III's Hierarchical Bayes Compiler:
- hbc: http://www.umiacs.umd.edu/~hal/HBC/
COMPILING
You're going to need several freely available packages:
- google-logging (glog): http://code.google.com/p/google-glog/
- google-gflags (gflags): http://code.google.com/p/google-gflags/
- google-sparsehash: http://code.google.com/p/google-sparsehash/
- Fast Mersenne Twister (DSFMT): http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/
- (INCLUDED) strutil.h from google-protobuf: http://code.google.com/p/protobuf/
Build and install all of those in the usual way.
Then DON'T just type 'make'; first, open the Makefile and check that all the paths are right. They're not, unless you're me. Fix them, then type 'make'.
RUNNING
To see log output from the various models you need GLOG_logtostderr=1 set in your environment. Here are some example invocations:
(nCRP with depth 5 and depth-dependent eta scaling)
GLOG_logtostderr=1 ./sampleMultNCRP \
--ncrp_datafile=data.txt \
--ncrp_depth=5 \
--ncrp_eta_depth_scale=0.5
(LDA with 50 topics: with a single branch per level the tree collapses to one path of 50 nodes, so all documents share the same 50 topics and the model reduces to flat LDA)
GLOG_logtostderr=1 ./sampleMultNCRP \
--ncrp_datafile=data.txt \
--ncrp_depth=50 \
--ncrp_max_branches=1
(soft cross-cutting categorization)
GLOG_logtostderr=1 ./sampleSoftCrossCatMixtureModel \
--mm_alpha=1.0 \
--eta=1.0 \
--mm_datafile=data.txt \
--M=2 \
--implementation=marginal
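If you're driving the samplers from another program, the same thing works by setting the glog variable in the child process's environment. A minimal Python sketch (not part of the toolkit; it just replays the soft cross-cat invocation above):

import os
import subprocess

# Run one sampler with glog output sent to stderr; the binary name and
# flags mirror the soft cross-cat example above.
env = dict(os.environ, GLOG_logtostderr="1")
subprocess.check_call([
    "./sampleSoftCrossCatMixtureModel",
    "--mm_alpha=1.0",
    "--eta=1.0",
    "--mm_datafile=data.txt",
    "--M=2",
    "--implementation=marginal",
], env=env)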
All of the binaries can be called with --help to list the available flags.
The datafile format for all the models is:
[NAME] [term_1]:[count] [term_2]:[count] ... [term_N]:[count]
where [NAME] is the (not necessarily unique) name of the document. All separating whitespace should be tabs (\t) rather than spaces, since spaces can appear inside term names.
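Here's a minimal Python sketch of a writer for this format. The helper name and the toy documents are invented for illustration; only the tab-separated NAME / term:count layout comes from the spec above:

from collections import Counter

def write_datafile(path, docs):
    # docs: iterable of (name, list-of-terms) pairs; names need not be unique.
    with open(path, "w") as f:
        for name, terms in docs:
            counts = Counter(terms)
            # Tab-separated fields, since term names may contain spaces.
            fields = [name] + ["%s:%d" % (t, c) for t, c in counts.items()]
            f.write("\t".join(fields) + "\n")

write_datafile("data.txt", [
    ("doc1", ["latent", "variable", "latent", "model"]),
    ("doc2", ["chinese restaurant", "process"]),
])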
THE FUTURE
I have a ton of Python scripts for post-processing the samples generated by these models. Eventually I'll include those as well. Feel free to ask for them; more user demand = faster turnaround.
LICENSE
Copyright 2010 Joseph Reisinger
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.