We propose a model to effectively detect salient objects in various videos; the proposed framework, termed spatiotemporal saliency and coherency (STSC), consists of two modules that capture spatiotemporal saliency and temporal coherency information in the superpixel domain, respectively. We first extract straightforward gradient contrasts (such as the color gradient and the motion gradient) as low-level features, from which high-level spatiotemporal gradient features are computed; the spatiotemporal saliency is then obtained by computing the weighted average geodesic distance among these features. The temporal coherency, measured by motion entropy, is used to eliminate false foreground superpixels that result from inaccurate optical flow and confusable appearance. Finally, the two discriminative video saliency indicators are combined to identify the salient regions. Extensive quantitative and qualitative experiments on four public datasets (FBMS, DAVIS, SegTrackV2, and ViSal) demonstrate the superiority of the proposed method over current state-of-the-art methods.