Self-localization is essential for mobile robots to move accurately, and many studies use an omnidirectional camera for this purpose. However, it is difficult to achieve fast and accurate self-localization with only a single uncalibrated omnidirectional camera. To realize it, we use tracked scale- and rotation-invariant feature points as landmarks: points that can be tracked and remain stable for a long time. In a landmark selection phase, the robot detects feature points using both a fast tracking method and a slower Speeded-Up Robust Features (SURF) method. After detection, the robot selects landmarks from among the detected feature points using a Support Vector Machine (SVM) trained on feature vectors derived from observation positions. In a self-localization phase, the robot detects landmarks while switching between the detection methods dynamically based on a tracking-error criterion that can be computed easily even in an uncalibrated omnidirectional image. We performed experiments in an approximately 10 [m] x 10 [m] mock supermarket using the navigation robot ApriTau™, which has an omnidirectional camera on its top. The results showed that ApriTau™ localized 2.9 times faster and 4.2 times more accurately with the developed method than with the SURF method alone. The results also showed that ApriTau™ could reach a goal within a 3 [cm] error from various initial positions in the mock supermarket.
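The dynamic switching between detection methods described above can be illustrated with a minimal sketch. Note that `fast_track`, `surf_redetect`, the error measure, and the threshold value are all illustrative assumptions introduced here; the paper's actual criterion and implementation are not specified in the abstract.

```python
import random

# Hypothetical tracking-error threshold (illustrative; the paper's
# actual criterion and value are not given in the abstract).
ERROR_THRESHOLD = 5.0

def fast_track(landmarks):
    """Stand-in for the fast tracking method: perturb each landmark's
    image position slightly and report a mean per-landmark error."""
    tracked = [(x + random.uniform(-1, 1), y + random.uniform(-1, 1))
               for x, y in landmarks]
    error = sum(abs(tx - x) + abs(ty - y)
                for (tx, ty), (x, y) in zip(tracked, landmarks)) / len(landmarks)
    return tracked, error

def surf_redetect(landmarks):
    """Stand-in for the slower SURF re-detection, assumed to recover
    accurate landmark positions once tracking has drifted."""
    return list(landmarks)

def localize_step(landmarks):
    """One self-localization step: try fast tracking first, and fall
    back to SURF when the tracking-error criterion exceeds the
    threshold. Returns the landmark positions and the method used."""
    tracked, error = fast_track(landmarks)
    if error > ERROR_THRESHOLD:
        return surf_redetect(landmarks), "SURF"
    return tracked, "tracking"
```

The key design idea this sketches is that the cheap tracker runs every step, and the expensive detector is invoked only when the error criterion signals drift, which is what makes the combined method faster than running SURF alone.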