Your ML model cache volume is getting wiped on restart, so the model is re-downloaded during the first search after a restart. Either point the cache at a path on persistent storage, or make sure you're not deleting the dynamic volume when you restart.
In my case I changed this:

```yaml
immich-machine-learning:
  ...
  volumes:
    - model-cache:/cache
```
To this:

```yaml
immich-machine-learning:
  ...
  volumes:
    - ./cache:/cache
```
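For context: a named volume like `model-cache` normally persists across `docker compose up`/`down`, but it is deleted by `docker compose down -v` (or `docker volume rm`), which forces the model re-download. A bind mount to a host path survives in either case. A minimal sketch of the bind-mount variant (the `./cache` host path is just an example; any persistent location works):

```yaml
services:
  immich-machine-learning:
    # ... image, environment, etc. unchanged ...
    volumes:
      # Bind mount: downloaded models land in ./cache on the host
      # and survive container recreation, including `down -v`.
      - ./cache:/cache
```

If you switch to the bind mount, the top-level `volumes:` declaration for `model-cache` in the compose file is no longer referenced and can be removed.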
I no longer have to wait uncomfortably long when I’m trying to show off Smart Search to a friend, or just need a meme pronto.
That’ll be all.
Did you run the Smart Search job?
Running now.
Let me know how inference goes. I might recommend that to a friend with a similar CPU.
I decided on the ViT-B-16-SigLIP2__webli model, so I switched to that last night. I also needed to update my server to the latest version of Immich, so a new Smart Search job ran late last night.
Out of 140,000+ photos/videos, it's down to 104,000 remaining, and I have it set to 6 concurrent tasks.
I don’t mind it processing for 24h. I believe when I first set Immich up, the smart search took many days. I’m still able to use the app and website to navigate and search without any delays.
Let me know how the search performs once it’s done. Speed of search, subjective quality, etc.