Federated learning (FL) is an emerging technique for collaboratively training a machine-learning model using the data and computation resources of mobile devices without exposing private or sensitive user data. Appropriate incentive mechanisms that motivate data and mobile-device owners to participate in FL are key to building a sustainable platform. However, it is difficult to evaluate the contribution level of each participant, and thus to determine appropriate rewards, without large computation and communication overhead. This paper proposes a computation- and communication-efficient method of estimating participants' contribution levels. The proposed method requires only a single FL training process, which significantly reduces overhead. Performance evaluations using the MNIST dataset show that the proposed method estimates participant contributions accurately with 46-49% less computation overhead and no communication overhead compared to a naive estimation method.
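To make the overhead of the naive baseline concrete, the following is a minimal sketch of leave-one-out contribution estimation, the kind of approach that requires one retraining per participant (N+1 trainings in total) and that the single-training method aims to avoid. The toy `train` function, the participant data, and all names here are illustrative assumptions, not the paper's actual FL pipeline or method.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(datasets):
    """Toy stand-in for an FL training run: merge the participants'
    data and score how close its mean is to a fixed target.
    (Assumption for illustration only; a real run trains a model.)"""
    if not datasets:
        return 0.0
    merged = np.concatenate(datasets)
    target = 1.0
    return -abs(merged.mean() - target)  # higher score = better

# Three hypothetical participants with data of varying quality.
participants = {
    "A": rng.normal(1.0, 0.1, 100),   # well-aligned data
    "B": rng.normal(0.5, 0.1, 100),   # biased data
    "C": rng.normal(1.0, 1.0, 100),   # noisy data
}

full_score = train(list(participants.values()))

# Naive estimation: retrain once with each participant left out;
# contribution = score drop caused by removing that participant.
# This is the N+1-trainings cost the proposed method avoids.
contributions = {}
for name in participants:
    rest = [d for k, d in participants.items() if k != name]
    contributions[name] = full_score - train(rest)

for name, c in contributions.items():
    print(f"{name}: {c:+.4f}")
```

With this toy scoring, the biased participant B ends up with a lower (negative) contribution than the well-aligned participant A, matching the intuition that rewards should track data quality.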