In this study, we propose an autonomous navigation stack based on the collaboration of two sensors: a laser rangefinder (LRF) and a 2D light detection and ranging (LiDAR) sensor. The difference between an LRF and a LiDAR is that a LiDAR uses the same laser technology but rotates around its axis, offering 360° visibility. The system can navigate in complex environments such as a convenience store or a supermarket. Through the collaboration between inclined LRFs and 2D LiDARs, a mobile robot can avoid objects below the scanning plane of a 2D LiDAR, for example, a small box, a short display rack, shelf legs, or the lower body of a shopping cart. Thus, the proposed system leverages the navigation stack's ability to use multiple observation sources to increase the accuracy of both navigation and obstacle avoidance. Our method aims to solve the problem of avoiding objects below the scanning plane of a 2D LiDAR and to increase detection accuracy during navigation using an inclined LRF. The proposed system is based on the Robot Operating System (ROS). Experiments were conducted to demonstrate that the system increases navigation and detection accuracy by effectively avoiding objects below the scanning plane of a 2D LiDAR.
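As a sketch of why a downward-inclined LRF can detect objects below a 2D LiDAR's scanning plane, the simple projection below recovers the forward distance and height of a hit point from a single range reading. The mount height, tilt angle, and function names are illustrative assumptions, not parameters taken from the paper:

```python
import math

def project_inclined_lrf(r, tilt_deg, mount_height):
    """Project one range reading from a downward-tilted LRF into the
    robot frame (illustrative geometry, not the paper's exact model).

    Returns (ground_distance, hit_height): the horizontal distance to
    the hit point and its height above the floor.
    """
    tilt = math.radians(tilt_deg)
    ground_distance = r * math.cos(tilt)            # forward offset of the hit
    hit_height = mount_height - r * math.sin(tilt)  # 0 when the beam reaches the floor
    return ground_distance, hit_height

def is_low_obstacle(r, tilt_deg, mount_height, tol=0.03):
    """A return shorter than the expected floor range means the beam hit
    something standing on the floor, i.e. an obstacle below the 2D
    LiDAR's scanning plane. `tol` absorbs range noise (assumed value).
    """
    floor_range = mount_height / math.sin(math.radians(tilt_deg))
    return r < floor_range - tol

# Example: LRF mounted 0.5 m high, tilted 30° downward.
# An unobstructed beam travels 0.5 / sin(30°) = 1.0 m before hitting the
# floor; a 0.6 m return therefore indicates a 0.2 m-tall obstacle about
# 0.52 m ahead -- well below a 2D LiDAR scanning at, say, 0.3 m height.
```

Marking such hit points as obstacles in the costmap is what lets the navigation stack combine this low-obstacle evidence with the 2D LiDAR's 360° scan as separate observation sources.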