-
AI@home: Classifying images with Ollama – part six: More about locations – and even more prompt improvements
In part five, I started injecting info about the place a picture was taken into my classifier. I did this by taking the embedded GPS coordinates from an image and translating them into something more meaningful. I had two ways to do this translation: For the manual register, I also introduced kinds, basically to decide…
-
AI@home: Classifying images with Ollama – part five: Places and more!
I am still kind of amazed at how much information exists in a mere 7 billion parameters in an AI model (my current preferred model, qwen2.5vl:7b). It is pretty good at understanding what happens in an image, and it is also able to, all by itself, recognize places like the Corinth Canal, the Eiffel Tower, the Kiyomizu-dera Temple…
-
AI@home: Classifying images with Ollama – part three: A command-line app
Submitting jobs from the command line with raw API calls works, but isn’t exactly user-friendly in the long run. And once you have the results, you probably want to use them for something. I decided I wanted XMP sidecar files that digiKam can understand. So, let’s get started! Architecture decisions As much as my decision…
-
AI@home: Classifying images with Ollama – part two.
In my previous blog post, I created a basic API that would take a file as input, analyze it in Ollama with a vision-capable model, and return the output. However, it had one major weakness: the HTTP call to the API would hang until the classifier job was finished. Classifying videos especially takes quite…
-
AI@home: Classifying images with Ollama – part one.
After having tried DeepSeek a bit, I quickly decided it was more fun to dig a bit deeper. DeepSeek looks like a solid tool for good results without too many hardware resources, as opposed to what I am exploring in this article: classifying an image with a general-purpose LLM. Nevertheless, it’s a fun…
-
My self-hosted AI journey Part 2: Using Ollama as your coding assistant
One of the big use cases for AI among developers today is coding assistants. A coding assistant basically serves as a source of useful suggestions, a sparring partner, and occasionally you can hand off larger tasks to it while you take a lunch break. Hiring human assistants means you can have them sign work contracts and confidentiality agreements –…
-
Playing with AIs at home – beginning the journey
After having upgraded my home server, I found myself with an abundance of both CPU power and memory, both of which are meant to be used. After having given my other down-scaled components the memory and CPU they truly need, I decided to see what my new hardware could be used for. One of the…
-
A hardware upgrade!
In essence, I wanted to play with fun functionality instead of battling defects and too few resources. So, I went on a shopping hunt, and I found the Beelink GTi14 Ultra 9. I spec’d it high, with 2×1 TB M.2 SSDs (I somewhat regret not going for lower (it was the max), and rather…
-
Kubernetes DR Part 4 – Addendum: How I solved the complete DR activation
My DR activation strategy is about injecting state into the ArgoCD applications. One piece of state I already inject globally is which cluster I am on, via the in-cluster secret, and based on that I set an env variable to either prod or DR. This is done through the cluster generator. I basically set the cluster…
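The cluster-generator pattern this teaser describes can be sketched roughly as follows. This is a minimal, hypothetical example, not the post's actual config: the ApplicationSet name, repo URL, namespace, and the `env` label value are all assumptions. The idea is that the cluster generator reads labels off each registered cluster secret (including the in-cluster one), making them available to the template:

```yaml
# Hypothetical sketch: an ApplicationSet whose cluster generator
# exposes a per-cluster "env" label (e.g. prod or dr) set on the
# cluster secret, which the template then uses to pick an overlay.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapp            # assumed name
spec:
  generators:
    - clusters: {}       # one template rendering per registered cluster
  template:
    metadata:
      name: 'myapp-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://git.example.com/org/myapp.git  # assumed URL
        targetRevision: HEAD
        # metadata.labels.env comes from the cluster secret's labels
        path: 'overlays/{{metadata.labels.env}}'
      destination:
        server: '{{server}}'
        namespace: myapp
```

With this shape, switching a cluster between prod and DR behavior is a matter of changing the `env` label on its cluster secret, rather than editing every application.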
-
Kubernetes DR Part 3 – migrating the workload applications
In my previous blog post, I got as far as having an identical Gitea in DR, with the same repositories that exist on-prem. They will, of course, not stay identical for very long without a way to keep them in sync. Before starting to migrate applications to ApplicationSets and creating them in DR, I…