Here's what Arq does:
Every 96 hours (by default), Arq "enforces your budget," which actually involves several steps:
1. It runs a backup.
2. Then it checks to make sure every object referenced by your backups is still available in S3.
3. Then it drops the oldest backup versions until it gets under budget (or until there's only 1 backup version left, even if that's over budget).
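The three steps above can be sketched roughly as follows. This is a minimal illustration, not Arq's actual code; the function names and data structures are my own, and it shows the current (pre-fix) step-2 behavior of dropping a version with a missing object along with every version before it:

```python
# Illustrative sketch of Arq's budget-enforcement pass (hypothetical names,
# not Arq's real implementation). `versions` is an oldest-first list of
# backup versions; each has the set of S3 object keys it references and the
# number of bytes it accounts for.

def enforce_budget(versions, objects_in_s3, budget_bytes):
    # Step 2: verify every referenced object is still present in S3.
    for i, v in enumerate(versions):
        if not v["objects"] <= objects_in_s3:
            # Current behavior: drop this version AND every earlier one.
            versions = versions[i + 1:]
            break  # (re-checking the remaining versions omitted for brevity)

    # Step 3: drop the oldest versions until under budget,
    # but always keep at least one version.
    def total_size(vs):
        return sum(v["size"] for v in vs)

    while len(versions) > 1 and total_size(versions) > budget_bytes:
        versions.pop(0)  # oldest first
    return versions
```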
When Arq finds a missing object in step 2, it drops that backup version and every backup version before it. This is pretty terrible behavior. I should have addressed it before now, but it's a case that should never happen: only if there's a bug in Arq (I'm pretty sure there isn't!), or if S3 drops an object (it's really not supposed to!).
The error message "deleted 1 head(s)…" indicates the last backup version (the "head commit") had a missing object, so Arq dropped that backup version and all the ones before it. The next time it does a backup, it will create a new backup version, but it won't re-upload the files that are already in S3.
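The reason nothing gets re-uploaded is that the data is stored content-addressed: objects are keyed by a hash of their contents, so a new backup version just references keys that already exist. Here's a rough sketch of that idea (my own illustration under that assumption, not Arq's actual code):

```python
import hashlib

# Illustrative content-addressed de-duplication (hypothetical names, not
# Arq's real implementation): each object is keyed by a hash of its data,
# so data already present in S3 is never uploaded a second time.

def backup(files, uploaded_keys, upload):
    """files: {path: bytes}; uploaded_keys: set of keys already in S3;
    upload: callable(key, data) that stores a new object."""
    manifest = {}
    for path, data in files.items():
        key = hashlib.sha1(data).hexdigest()  # content-addressed key
        if key not in uploaded_keys:          # skip objects S3 already has
            upload(key, data)
            uploaded_keys.add(key)
        manifest[path] = key  # the new backup version references the key
    return manifest
```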
I'm working on an update that replaces the terrible behavior with much better behavior. It will rewrite the backup metadata to show which files are missing, and keep the rest. It will show a warning icon next to all backup versions that have missing objects. Arq can't recreate the past, but the next time Arq backs up, it will of course upload missing files.
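The planned behavior could look something like this (again a hypothetical sketch of the idea, not the actual update): instead of discarding versions, flag the files whose objects are gone and keep everything else:

```python
# Sketch of the improved behavior (illustrative only): mark missing files
# in a backup version's metadata rather than dropping the whole version.

def repair_version(version, objects_in_s3):
    missing = [f for f in version["files"] if f["object"] not in objects_in_s3]
    for f in missing:
        f["missing"] = True       # file data is gone from S3
    version["has_missing"] = bool(missing)  # drives the UI warning icon
    return version
```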
I'm going to put it out for beta testing as soon as I possibly can. It's code-complete, but I'm not done with my tests yet.
Again, I apologize for the terrible behavior. An improvement is coming very soon.