The two-step GMM estimators of Arellano and Bond (1991) and Blundell and Bond (1998) for dynamic panel data models have been widely used in empirical work; however, neither of them performs well in small samples with weak instruments.
The continuous-updating GMM estimator proposed by Hansen, Heaton, and Yaron (1996) is in principle able to reduce the small-sample bias, but it involves a high-dimensional optimization when the number of regressors is large.
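For concreteness, the two-step and continuous-updating criteria can be sketched on a toy problem. The exponential-rate model, the moment conditions, and the grid-search minimizer below are illustrative assumptions chosen for exposition, not the dynamic-panel estimators cited above:

```python
import numpy as np

def moments(x, lam):
    # Over-identified moment conditions for x ~ Exponential(rate=lam):
    #   E[x - 1/lam] = 0  and  E[x^2 - 2/lam^2] = 0  (2 moments, 1 parameter).
    return np.column_stack([x - 1.0 / lam, x**2 - 2.0 / lam**2])

def gmm_two_step(x, grid):
    """Two-step GMM: identity weighting, then efficient weighting."""
    def crit(lam, W):
        gbar = moments(x, lam).mean(axis=0)  # sample average of the moments
        return gbar @ W @ gbar               # quadratic-form GMM criterion
    lam1 = min(grid, key=lambda l: crit(l, np.eye(2)))   # step 1
    g = moments(x, lam1)
    W = np.linalg.inv(g.T @ g / len(x))  # inverse covariance of the moments
    return min(grid, key=lambda l: crit(l, W))           # step 2

def gmm_cue(x, grid):
    """Continuous-updating GMM: the weight matrix is recomputed at every
    trial value, which is what makes the optimization harder in general."""
    def crit(lam):
        g = moments(x, lam)
        gbar = g.mean(axis=0)
        return gbar @ np.linalg.inv(g.T @ g / len(x)) @ gbar
    return min(grid, key=crit)

rng = np.random.default_rng(0)
x = rng.exponential(scale=0.5, size=5000)   # true rate lam = 2
grid = np.linspace(0.5, 5.0, 2000)
lam2, lamc = gmm_two_step(x, grid), gmm_cue(x, grid)
```

Here a one-dimensional grid search stands in for a numerical optimizer; with many regressors the continuous-updating criterion must be minimized over a high-dimensional parameter space, which is the practical difficulty noted above.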
The GMM method minimizes a certain norm of the sample averages of the moment conditions. GMM estimators are known to be consistent, asymptotically normal, and efficient in the class of all estimators that do not use any extra information aside from that contained in the moment conditions. This interpretation gives some insight into why there is less bias associated with the continuous-updating estimator.

k-means clustering is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining. Given a set of observations (x_1, x_2, ..., x_n), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k (≤ n) sets S = {S_1, S_2, ..., S_k} so as to minimize the within-cluster sum of squares (WCSS), i.e. the variance. Formally, the objective is to find

    arg min_S Σ_{i=1}^{k} Σ_{x ∈ S_i} ‖x − μ_i‖²,

where μ_i is the mean of the points in S_i. The most common algorithm uses an iterative refinement technique, which results in a partitioning of the data space into Voronoi cells.
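That iterative refinement (Lloyd's algorithm) can be sketched in a few lines; the two-blob data set below is an illustrative assumption, not part of the original text:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's algorithm: alternate assignment and centroid-update steps."""
    rng = np.random.default_rng(seed)
    # Initialise centroids as k distinct randomly chosen observations.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest centroid,
        # which partitions the space into Voronoi cells.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each centroid becomes the mean of its cluster
        # (an empty cluster keeps its previous centroid).
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break  # converged to a local minimum of the WCSS
        centroids = new
    return centroids, labels

# Two well-separated Gaussian blobs in the plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centroids, labels = kmeans(X, k=2)
```

Each iteration weakly decreases the WCSS objective, so the algorithm terminates, but only at a local minimum that depends on the initial centroids.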