i.e., move it back. whatever the original reason was, it's now gone.
this order is way more natural, which allows us to remove the osrecadd
and S_DONE hacks.
this helps enormously on the first sync of a 100k message box with a
limit of 1k messages. it also happens to make the syncing idempotent.
in a few conditionals we now explicitly test for max_messages being
enabled, not smaxxuid != 0, as after the initial fetch with no important
messages smaxxuid is zero, but we still have to skip over 99k messages
in the above case.
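a minimal sketch of the kind of test that changes, with a hypothetical
struct layout and helper - only the names smaxxuid and max_messages come
from the actual change:

    /* hypothetical layout; only the field names are from the change above */
    typedef struct {
        unsigned smaxxuid;       /* recorded cut-off uid, 0 if none yet */
        unsigned max_messages;   /* configured limit, 0 if disabled */
    } sync_state_t;

    static int limit_enabled(const sync_state_t *s)
    {
        /* test whether the limit is configured at all, not whether a
         * cut-off uid was already recorded - after an initial fetch that
         * found no important messages, smaxxuid is still zero, yet the
         * 99k old messages must still be skipped */
        return s->max_messages != 0;
    }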
previous sequence:
examine & propagate new => examine old => propagate old
new sequence:
examine new => examine old => propagate new => propagate old
this alone does not buy us much ...
we can bump the internal variable wherever convenient, but we cannot
log it until we know that all messages were copied, as otherwise we
could miss some new messages after an interruption. with the new
approach, an interruption would merely cause some additional traffic.
less code duplication, more logical order of issued driver commands
(especially after the next commit), and the "side effect" of letting the
message expiration code see those deletions if they are asynchronous.
the delay optimized the corner case of previously important but now
expired messages on the slave disappearing, either through an external
expunge or after a journal replay. no point in pessimizing the common
case.
the removed code would only ever trigger if a) we were after a journal
replay or b) something external expunged the expired messages - both are
corner cases not worth the extra code.
however, this means that the syncing code further down now needs to take
care of these zombies.
in the end, the normal cleanup will take care of all expired entries,
new and old.
that is, don't count them towards the total only when they are below the
cut-off point. making them extend the working set even though they are
already inside it is counterintuitive.
while maildir has a clearly defined meaning of "recent" and for example
mutt handles it gracefully, IMAP's definition is fubared to the point
that some servers (for example gmail) simply refuse to support it.
for symmetry reasons it is best to pretend that it doesn't exist at all.
it doesn't seem too useful anyway (the user can simply mark the messages
as read to allow pruning).
and last but not least, the man page of mbsync says nothing about
"recent", only "unread" - unlike the isync man page.
even if we are not propagating new messages, the appearance of new
messages on the slave can lead to expiring older messages. for that, we
need to know their importance, and thus flags.
the alternative would be not doing an expiration run when not fetching
new messages, but that would mean more conditionals all over the place.
as the decision is somewhat arbitrary, just do the simpler thing.
the header is not space-critical, so use proper name-value pairs.
this has the additional advantage that subsequent format changes can be
done much more easily.
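as a rough illustration (the key names and the helpers below are
hypothetical, not the actual header format), name-value pairs can be
written and read back without caring about field order or later additions:

    #include <stdio.h>
    #include <string.h>

    /* writing: one "Name value" pair per line (hypothetical keys) */
    static void save_header(FILE *f, unsigned uidvalidity, unsigned maxuid)
    {
        fprintf(f, "UidValidity %u\n", uidvalidity);
        fprintf(f, "MaxUid %u\n", maxuid);
    }

    /* reading: identify fields by name, so ordering does not matter and
     * keys added by later format versions are simply not matched */
    static void load_header(FILE *f, unsigned *uidvalidity, unsigned *maxuid)
    {
        char key[32];
        unsigned value;

        while (fscanf(f, "%31s %u", key, &value) == 2) {
            if (!strcmp(key, "UidValidity"))
                *uidvalidity = value;
            else if (!strcmp(key, "MaxUid"))
                *maxuid = value;
        }
    }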
otherwise we would propagate phantom deletions.
this affected only sync runs after an interruption while storing
messages, so it went (mostly?) unnoticed.
amends 9c86ec344.
S_FIND was for the sync record status field. it has no business in the
sync vars status fields. its value coincided with ST_SELECTED, which
luckily only meant that we always tried to match up TUIDs even if there
was nothing to do.
the need for TUID matching arises in two mostly independent
circumstances, so add two separate flags ST_FIND_{OLD,NEW}.
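a sketch of the resulting flag layout; the bit values are made up, only
the flag names and the value coincidence are from the description above:

    #define S_FIND        (1<<3)   /* sync record status flag; was misused in the sync vars status */

    #define ST_SELECTED   (1<<3)   /* sync vars status flag; happened to share S_FIND's value */
    #define ST_FIND_OLD   (1<<4)   /* match TUIDs left over from an interrupted earlier run */
    #define ST_FIND_NEW   (1<<5)   /* match TUIDs of the messages we just pushed */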
this value is only ever used to find just pushed messages by TUID, so we
can simply use the UIDNEXT value from before we started pushing - and of
course, we need to record that in the journal. it makes no sense to log
the new value after completing a search, as there won't be a next search
before we push the next messages.
the purpose of this variable is to hold the UIDNEXT value from before
we started pushing new messages, i.e., the minimal uid we can expect
them to have.
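a minimal sketch with hypothetical names and a hypothetical journal record
format; only the ordering is taken from the above - capture UIDNEXT and
journal it before the first message is appended:

    #include <stdio.h>

    static void prepare_push(FILE *journal, unsigned uidnext, unsigned *newmaxuid)
    {
        *newmaxuid = uidnext;   /* minimal uid the pushed messages can get */
        fprintf(journal, "newmaxuid %u\n", *newmaxuid);
    }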
fdatasync() the journal after creating the pair record and recording
the TUID, but before the message propagation actually starts.
all other writes to the journal are not flushed, as they will at worst
cause some unnecessary network traffic without visible effect.
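a minimal sketch of that flush point, assuming the journal is a stdio FILE
(the function and variable names are illustrative):

    #include <stdio.h>
    #include <unistd.h>

    /* force the pair record and the TUID onto the disk before the
     * corresponding message propagation is started */
    static int commit_journal(FILE *jfp)
    {
        if (fflush(jfp) == EOF)             /* stdio buffers -> kernel */
            return -1;
        if (fdatasync(fileno(jfp)) < 0)     /* kernel buffers -> disk */
            return -1;
        return 0;
    }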
make sure that the new state is committed to disk before overwriting the
old version - by default meta data is committed first, so we may end up
with no valid state at all otherwise.
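the classical way to get that ordering is to write the new state to a
temporary file, fsync() it, and only then rename() it over the old version;
a minimal sketch, with illustrative paths and the state passed in as a
plain string:

    #include <stdio.h>
    #include <unistd.h>

    static int save_state(const char *path, const char *tmppath, const char *state)
    {
        FILE *f = fopen(tmppath, "w");
        if (!f)
            return -1;
        if (fputs(state, f) == EOF ||
            fflush(f) == EOF ||             /* stdio buffers -> kernel */
            fsync(fileno(f)) < 0) {         /* contents -> disk, before the rename */
            fclose(f);
            return -1;
        }
        if (fclose(f) == EOF)
            return -1;
        return rename(tmppath, path);       /* atomically replace the old version */
    }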
this removes the pathological O(<number of sync records> * <number of
new messages>) case at the cost of being a bit more cpu-intensive (but
O(<number of all messages>)) for old messages.
when we find that the store is incompatible with in-store sync state,
we want to fail the whole channel. however, we must not claim that the
store died, otherwise it won't be disposed of properly.
instead of SEARCHing every single message (which is slow and happens to
be unreliable with M$ Exchange 2010), just FETCH the new messages from
the mailbox - the ones we just appended will be amongst them.
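roughly, this replaces one round trip per appended message with a single
range fetch starting at the UIDNEXT recorded before pushing; the exact
fetch items and the X-TUID header name below are assumptions, not taken
from the text above:

    #include <stdio.h>

    int main(void)
    {
        unsigned uidnext_before_push = 1500;   /* recorded before the first APPEND */
        char cmd[128];

        /* old: one round trip per message, e.g.
         *   UID SEARCH HEADER X-TUID <tuid>
         * new: one range fetch, then match the TUIDs from the returned headers */
        snprintf(cmd, sizeof(cmd),
                 "UID FETCH %u:* (UID BODY.PEEK[HEADER.FIELDS (X-TUID)])",
                 uidnext_before_push);
        puts(cmd);
        return 0;
    }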