The interesting problem is not merely asking an LLM questions inside a notes app. It is building a loop in which useful conversational output gets compressed into stable notes, and those notes in turn sharpen the next round of interaction.
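The loop described above can be sketched as a few lines of Python. This is a minimal illustration, not an implementation: the `model` function is a hypothetical stand-in for a real LLM call, and `compress` stands in for whatever summarization step turns a transcript into a stable note (a real system might ask the model itself to do the compression).

```python
def model(prompt: str) -> str:
    # Hypothetical LLM stub (an assumption for this sketch); a real
    # implementation would call an actual model API here.
    return f"answer given {len(prompt)} chars of context"

def compress(exchange: str, notes: list[str]) -> list[str]:
    # Compression step: reduce a conversational exchange to a short,
    # stable note. Here we simply truncate; a real system would summarize.
    return notes + [exchange[:80]]

def loop(questions: list[str]) -> list[str]:
    notes: list[str] = []
    for q in questions:
        # The notes sharpen the next round: they are fed back into
        # the prompt before the new question.
        prompt = "\n".join(notes) + "\n" + q
        answer = model(prompt)
        notes = compress(f"Q: {q} A: {answer}", notes)
    return notes

notes = loop(["What is the loop?", "Why compress?"])
```

Each pass through `loop` grows the note store, and each new prompt carries the accumulated notes forward, which is the feedback the opening paragraph describes.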