TTFT
Test Time Fine-Tuning
The process of adapting a pre-trained model to new data at test time in order to improve its performance on a specific task or input distribution.
TTFT involves updating a model's parameters during the test phase based on input data that was not seen during training. This allows the model to adapt to distribution shifts and improve performance in dynamic environments where data characteristics change over time. It applies a few optimization steps on the test data to fine-tune the model, enhancing its generalization and robustness. TTFT is particularly useful in scenarios such as personalized recommendation, medical diagnostics, or any real-time application where the input data may differ significantly from the training distribution.
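As a concrete illustration, here is a minimal PyTorch sketch of the idea: copy the model, take a few gradient steps on an unlabeled test batch, then predict. Entropy minimization is used here as a stand-in adaptation objective, one common choice in the test-time adaptation literature; the function name, hyperparameters, and loss are illustrative assumptions, since TTFT itself is agnostic to the specific objective.

```python
import copy
import torch
import torch.nn.functional as F

def test_time_fine_tune(model, test_batch, steps=3, lr=1e-4):
    """Adapt a copy of `model` to one unlabeled test batch, then predict.

    Entropy minimization stands in for the self-supervised
    adaptation loss; TTFT is agnostic to this choice.
    """
    adapted = copy.deepcopy(model)  # keep the original weights intact
    adapted.train()
    optimizer = torch.optim.SGD(adapted.parameters(), lr=lr)

    for _ in range(steps):  # a few optimization steps on test data
        logits = adapted(test_batch)
        probs = F.softmax(logits, dim=-1)
        # Mean prediction entropy: lower entropy = more confident predictions.
        loss = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    adapted.eval()
    with torch.no_grad():
        return adapted(test_batch)  # predictions from the adapted model
```

Deep-copying the model keeps adaptation episodic: each new test batch starts from the original pre-trained weights, so errors accumulated on one batch do not contaminate later ones.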
The idea of adapting models during inference dates back to the early 2000s, but it gained popularity in the late 2010s with the rise of deep learning and the need for models to handle real-world, non-stationary data.
Significant contributions to TTFT have come from researchers at various institutions, including Stanford University, where advances in domain adaptation and transfer learning laid much of the groundwork. Notable figures include Andrew Ng, whose group's work on model robustness and adaptability in dynamic environments has been influential.