Spectral clustering methods partition a dataset into clusters by mapping the input data points into the space spanned by the eigenvectors of the Laplacian matrix. In this article we use the incomplete Cholesky decomposition (ICD) to construct an approximation of the graph Laplacian and reduce the size of the related eigenvalue problem from N to m, with m ≪ N. In particular, we introduce a new stopping criterion based on the normalized mutual information between consecutive partitions, which terminates the ICD when the change in the cluster assignments falls below a given threshold. Compared with existing ICD-based spectral clustering approaches, the proposed method reduces the number m of selected pivots (i.e., it yields a sparser model) while maintaining high clustering quality. The method scales linearly with the number of input data points N and has low memory requirements, because only matrices of size N × m and m × m are computed (in contrast to standard spectral clustering, which requires constructing the full N × N similarity matrix). Furthermore, we show that the number of clusters can be reliably selected with the gap heuristic computed on a small matrix R of size m × m instead of the entire graph Laplacian. The effectiveness of the proposed algorithm is demonstrated on several datasets.
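The pipeline described above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: it combines a greedy pivoted (incomplete) Cholesky factorization of an RBF similarity matrix with spectral clustering on the resulting N × m factor, stopping when the normalized mutual information between consecutive partitions stabilizes. All concrete names and parameter values (`sigma`, `nmi_tol`, `m_max`, the RBF kernel choice) are assumptions.

```python
# Sketch of ICD-based spectral clustering with an NMI stopping criterion.
# Assumed setup: RBF kernel, greedy pivot selection on the residual diagonal.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score


def rbf_column(X, j, sigma):
    # Column j of the N x N RBF similarity matrix, computed on demand
    # so the full matrix is never stored.
    d2 = np.sum((X - X[j]) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))


def cluster_from_factor(G, k, seed=0):
    # Spectral clustering using only the N x m factor G (K ~ G G^T).
    # Approximate degrees: d = K 1 ~ G (G^T 1); normalize M = D^{-1/2} G.
    deg = G @ (G.T @ np.ones(G.shape[0]))
    M = G / np.sqrt(np.maximum(deg, 1e-12))[:, None]
    # Leading eigenvectors of M M^T are recovered from the m x m matrix
    # M^T M, so the eigenvalue problem has size m instead of N.
    S = M.T @ M
    w, V = np.linalg.eigh(S)
    idx = np.argsort(w)[::-1][:k]
    U = M @ V[:, idx]                       # top-k spectral embedding, N x k
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(U)


def icd_spectral_clustering(X, k, sigma=1.0, m_max=50, nmi_tol=0.999):
    N = X.shape[0]
    G = np.zeros((N, m_max))
    diag = np.ones(N)                       # residual kernel diagonal (K_ii = 1)
    prev_labels = None
    for m in range(m_max):
        piv = int(np.argmax(diag))          # greedy pivot selection
        col = rbf_column(X, piv, sigma) - G[:, :m] @ G[piv, :m]
        G[:, m] = col / np.sqrt(max(diag[piv], 1e-12))
        diag -= G[:, m] ** 2
        if m + 1 >= k:                      # need at least k pivots to cluster
            labels = cluster_from_factor(G[:, :m + 1], k)
            if prev_labels is not None and \
               normalized_mutual_info_score(prev_labels, labels) >= nmi_tol:
                return labels, m + 1        # partitions stabilized: stop ICD
            prev_labels = labels
    return prev_labels, m_max
```

Note that only the N × m factor `G` and the m × m matrix `S` are ever formed, matching the memory profile stated above; in a production implementation the re-clustering would typically be run every few pivots rather than after each one.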