My new book "Practical AI with Google: A Solo Knowledge Worker's Guide to Gemini, AI Studio, and LLM APIs"
You can read my new book online at Practical AI with Google: A Solo Knowledge Worker's Guide to Gemini, AI Studio, and LLM APIs
Most books I have written are deeply technical - this one is not. Here I attempt to make state-of-the-art AI approachable and usable for a wider, not necessarily technical, audience.
A Decision Surface
My previous book, Ollama in Action: Building Safe, Private AI with LLMs, Function Calling and Agents, covered running local LLMs with Ollama - an approach I like for security and privacy (and fun!) reasons. Recent open models like qwen3:30b and gemma3:27b-it-qat run very well on a high-end home computer, but they are slow compared to the APIs from Google, OpenAI, and others, and that slows down my development process.
For a few years I wrote my own LLM-based utilities (using both local models and commercial APIs) to get things done. Now I do much less coding, instead using products built around Gemini (and, less often, ChatGPT) because the product offerings do most of what I want with no custom coding on my part.
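For readers curious what such a utility looks like, here is a minimal sketch of the kind of script I mean. It assumes Ollama is serving a model locally on its default port and that a Gemini API key is available in the GEMINI_API_KEY environment variable; the model names are illustrative, not recommendations.

```python
import os
import requests

PROMPT = "Summarize the tradeoffs between local and hosted LLMs."

def ask_ollama(prompt: str, model: str = "qwen3:30b") -> str:
    """Query a model served locally by Ollama's default endpoint."""
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,  # local models on home hardware can be slow
    )
    r.raise_for_status()
    return r.json()["response"]

def ask_gemini(prompt: str, model: str = "gemini-2.0-flash") -> str:
    """Query the hosted Gemini API via its REST endpoint."""
    url = ("https://generativelanguage.googleapis.com/v1beta/"
           f"models/{model}:generateContent")
    r = requests.post(
        url,
        params={"key": os.environ["GEMINI_API_KEY"]},
        json={"contents": [{"parts": [{"text": prompt}]}]},
        timeout=60,
    )
    r.raise_for_status()
    return r.json()["candidates"][0]["content"]["parts"][0]["text"]

if __name__ == "__main__":
    print(ask_gemini(PROMPT))    # hosted: fast, state of the art
    # print(ask_ollama(PROMPT))  # local: private, slower
```

Swapping one function call for the other is the whole "decision surface" in miniature: speed and capability on one side, privacy and independence on the other.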
I have been retired for two years, but I still spend 3 to 4 hours a day on what I call "personal research," so my developer use cases are probably different from those of most people reading this. Something we all share, however, is the experience of living through the exponential growth of AI capabilities.
What I can run on my home system (a Mac mini M2 Pro with 32GB of RAM) seems to lag the capabilities of commercial APIs and end-user AI products by about six months. For now I play in both worlds: state-of-the-art commercial AI and what I can run locally.