I use Codex CLI and Claude Code a lot during the day.
Not through an API. Not through a custom integration.
Mostly just the normal local CLIs, logged in with my subscription accounts, running inside whatever project I am working on.
That workflow is simple enough, but there was one small thing I kept thinking about.
Some of these tools meter usage in rolling time windows. In practice, the first message starts the window. If I only remember to open the tool later in the day, I may still have the same subscription, but the timing is not as useful as it could be.
So I built a tiny helper for myself: agent-poke.
The repo is here: github.com/semiherdogan/agent-poke.
It does not try to be smart.
It just sends a small message to the agents at fixed times.
Hey!
That is the whole idea.
The problem I wanted to solve
Claude is most useful to me during the workday, roughly between 09:00 and 19:00.
Codex is useful almost all the time.
At first, I thought about making the scheduler dynamic. It could track the last run, calculate the next allowed time, handle per-agent windows, and keep its own state.
That was too much for this problem.
What I actually needed was a static schedule that lines up with the day well enough.
The current schedule is:
0 6 * * * /app/scripts/run-checkin.sh
6 11 * * * /app/scripts/run-checkin.sh
12 16 * * * /app/scripts/run-checkin.sh
18 21 * * * /app/scripts/run-checkin.sh
In plain English:
- 06:00 starts the first window before the workday
- 11:06 is five hours plus six minutes later
- 16:12 carries the window through the end of the workday
- 21:18 opens an evening window with a little more buffer
The six extra minutes are intentional.
I do not need to hit the boundary exactly. I only need to avoid firing slightly too early.
The container timezone is configured through TZ in docker-compose.yml. For me that is:
TZ: Europe/Istanbul
Why Docker
I want this to run on a server, not on my laptop.
That made Docker Compose the simplest deployment target. The container has the two CLIs installed, a scheduler, and a small runner script.
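Pulling the pieces mentioned in this post together, the compose file looks roughly like this. Treat it as a sketch of the shape, not the actual file from the repo:

```yaml
services:
  agent-poke:
    build: .
    environment:
      TZ: Europe/Istanbul        # container timezone for the cron schedule
      LOG_KEEP: 20               # how many recent log files to keep
    volumes:
      - agent_home:/home/agent   # persists CLI login state across rebuilds
      - ./logs:/app/logs
      - ./workspace:/app/workspace

volumes:
  agent_home:
```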
The shape is deliberately boring:
agent-poke/
  Dockerfile
  docker-compose.yml
  config/
    schedule.cron
  scripts/
    login-codex.sh
    login-claude.sh
    run-checkin.sh
  logs/
  workspace/
There is no database. No web UI. No queue. No service framework.
It is just enough container around a few CLI commands.
Login is still manual
This part was important to me.
I did not want to bake credentials into an image or pretend subscription login is the same thing as an API key.
The user logs in once, using the official CLI flow.
For Codex:
docker compose run --rm agent-poke login-codex
That runs:
codex login --device-auth
For Claude:
docker compose run --rm agent-poke login-claude
That runs:
claude auth login --claudeai
Both CLIs save their login state under the container user’s home directory.
That directory is a Docker volume:
volumes:
- agent_home:/home/agent
So the image can be rebuilt, but the login state stays.
The one command to avoid is:
docker compose down -v
That removes the volume, which also removes the saved logins.
Interactive check-ins
My first thought was to avoid the interactive CLIs entirely.
For scheduled jobs, non-interactive commands are usually cleaner:
codex exec "Hey!"
claude -p "Hey!"
But for this specific tool, I decided against that.
The goal is to behave like normal CLI usage. I do not want Codex and Claude to use different paths, and I do not want Claude to accidentally fall into a separate programmatic usage bucket if its metering changes.
So the final version drives both CLIs the same way:
- open the interactive CLI with "Hey!" as the initial prompt
- wait for the response to settle
- exit the CLI
The runner starts the agents in parallel. That matters because I do not want Claude to start several minutes after Codex just because one response is slow.
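The parallel structure is simple enough to sketch in a few lines of shell. The `run_codex` and `run_claude` functions here are placeholders for whatever actually drives each CLI, not the real runner:

```shell
#!/bin/sh
# Sketch of the parallel check-in structure. Both agents are launched
# before either one is waited on.
run_codex()  { sleep 1; }  # stand-in for the real Codex check-in
run_claude() { sleep 1; }  # stand-in for the real Claude check-in

run_codex &  codex_pid=$!
run_claude & claude_pid=$!

# Waiting on each pid separately means a slow Codex response never
# delays Claude's start, and vice versa: both jobs are already running.
wait "$codex_pid"; codex_status=$?
wait "$claude_pid"; claude_status=$?
echo "codex=$codex_status claude=$claude_status"
```

The important detail is the order: both `&` launches happen before the first `wait`, so the slower agent only affects when the script finishes, not when the other agent begins.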
Passing the message as the initial prompt also avoids a fragile timing problem: the scheduler does not have to wait for the TUI input box to become ready before typing.
That means expect is back in the image.
It is a little less elegant than a pure non-interactive command, but it matches the way I actually use the tools.
There is one operational consequence: after login, I run one manual check so both CLIs can ask and remember any workspace trust prompt.
docker compose run --rm agent-poke checkin
After that, the scheduled runs use the same interactive path.
The runner keeps recent logs only. The default is:
LOG_KEEP: 20
That is enough to inspect recent runs without letting a small helper slowly fill a disk.
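The retention step can be a one-liner built from `ls -1t` and `tail`. This is a hypothetical sketch, assuming one `checkin-*.log` file per run; the function name and demo are mine, not the repo's:

```shell
#!/bin/sh
# Hypothetical sketch of log retention: keep the $keep newest
# checkin logs in $dir and delete the rest.
prune_logs() {
  dir="$1"; keep="$2"
  # ls -1t lists newest first; tail selects everything past the cutoff.
  ls -1t "$dir"/checkin-*.log 2>/dev/null |
    tail -n +"$((keep + 1))" |
    while read -r old; do rm -f "$old"; done
}

# Demo: create 25 fake logs, prune down to 20.
demo_dir=$(mktemp -d)
i=1
while [ "$i" -le 25 ]; do
  touch "$demo_dir/checkin-$i.log"
  i=$((i + 1))
done
prune_logs "$demo_dir" 20
remaining=$(ls "$demo_dir" | wc -l)
```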
Raw terminal output is disabled by default because TUI tools emit a lot of escape sequences. If I need to debug the actual screen output, I can turn it back on with:
RAW_AGENT_OUTPUT: 1
The normal log only records whether each agent started and completed successfully. I deliberately avoided parsing TUI output because small formatting changes in Codex or Claude could make that brittle.
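Status-only logging like that stays trivial because nothing is parsed. A hypothetical sketch of the idea (the `run_agent` helper and log format are illustrative, not copied from the repo):

```shell
#!/bin/sh
# Hypothetical sketch of status-only logging: record when each agent
# starts and whether it finished, and discard all TUI output.
LOG_FILE=$(mktemp)
log() { printf '%s %s\n' "$(date -u +%FT%TZ)" "$1" >> "$LOG_FILE"; }

run_agent() {
  name="$1"; shift
  log "$name: started"
  # The agent's screen output goes to /dev/null; only the exit
  # status decides what gets logged.
  if "$@" > /dev/null 2>&1; then
    log "$name: completed"
  else
    log "$name: failed"
  fi
}

run_agent codex  true    # stand-in for the real Codex invocation
run_agent claude false   # stand-in that fails, to show the failure line
cat "$LOG_FILE"
```

Because only exit codes are inspected, a cosmetic change in either CLI's TUI cannot break the log.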
The small deployment detail
The container runs as a non-root user.
That is usually the right default, but it means the host-mounted logs/ and workspace/ directories need to be writable by that user.
On this image, the agent user is 1001:1001, so the server setup needs:
mkdir -p logs workspace
chown -R 1001:1001 logs workspace
That fixed the only real deployment issue I hit.
Without it, the check-in script would start but then fail as soon as it tried to create a log file.
What I like about it
This is one of those tools where the main feature is that there are not many features.
The final version is just:
- fixed schedule
- manual login
- persistent home volume
- interactive CLI check-ins
- plain log files
- Docker Compose deployment
That is enough for now.
If I ever need per-agent scheduling or smarter state, I can add it later.
But for this problem, the static version is easier to trust.