Linear Approximate Dynamic Programming (LADP) and Incremental Approximate Dynamic Programming (IADP) are reinforcement learning methods intended to advance the field of adaptive flight control. This paper assesses their performance and convergence, as well as the impact of sensor noise on policy convergence, online system identification, performance, and control surface deflection. After summarising their theory and derivation with full-state (FS) feedback and output feedback (OPFB), both methods are implemented on a linearised longitudinal model of the F-16. To establish an objective performance comparison, their hyper-parameters are tuned with an evolutionary algorithm, Particle Swarm Optimisation (PSO). Results show that LADP and IADP achieve the same performance with FS feedback, whereas LADP outperforms IADP when only OPFB is available. Output noise causes LADP with OPFB to diverge, while for IADP with OPFB sensor noise improves performance through better exploration of the solution space. The present research aims to bridge the gap between the discussed ADP algorithms and real-world systems.