we would try to print the uids from the non-existing srec of unpaired
messages while preparing expiration.
this would happen only if a) MaxMessages was configured and b) new
messages appeared on the slave but we were not pushing, so it's a bit of
a corner case.
found by coverity.
this would happen in the absurd corner case that the response code is
properly terminated with a closing bracket, but the atom itself is an
unterminated double-quoted string.
NOT found by coverity.
if something managed to make the maildir .uidvalidity files big enough
(possible only by appending garbage or scrambling them altogether), we
would overflow the read buffer by one when appending the terminating
null.
this is not expected to have any real-world impact.
found by coverity.
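the fix pattern, sketched with made-up names (this is not the actual
maildir code): reserve one byte for the terminating null when reading.

    #include <unistd.h>

    /* illustrative only: read at most bufsz - 1 bytes so the
     * terminating null always fits inside the buffer. */
    static ssize_t read_small_file( int fd, char *buf, size_t bufsz )
    {
        ssize_t n = read( fd, buf, bufsz - 1 );

        if (n >= 0)
            buf[n] = 0;
        return n;
    }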
for some reason lost in history, the prime_deltas were actually wrong,
leading to using composite numbers.
the right sequence is available at http://oeis.org/A092131.
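a sketch for regenerating such a table, assuming each delta is the
smallest offset that makes 2^n + delta prime (the range and output
format are made up):

    #include <stdio.h>

    /* trial-division primality test; fine for numbers of this size. */
    static int is_prime( unsigned long long v )
    {
        if (v < 2)
            return 0;
        for (unsigned long long d = 2; d * d <= v; d++)
            if (!(v % d))
                return 0;
        return 1;
    }

    int main( void )
    {
        for (int n = 4; n <= 24; n++) {
            unsigned long long base = 1ULL << n, delta = 0;

            while (!is_prime( base + delta ))
                delta++;
            printf( "2^%d + %llu is prime\n", n, delta );
        }
        return 0;
    }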
the trivial approach of having "home" and "root" stores produced ugly
results, and totally failed with the introduction of nested folder
handling.
instead, create a store per local directory, just as one would manually.
CCMAIL: 737708@bugs.debian.org
makes for leaner Channel sections.
note: the global delete and expunge variables exist so the command line
can override the config file despite the otherwise backwards behavior.
the BODY[] item in the FETCH response corresponds to what we requested,
and its presence doesn't imply that it actually contains anything useful
- new messages may appear in the mailbox in addition to those we stored
ourselves, and these will obviously have no TUID.
the global timezone variable is glibc-specific.
so use timegm() instead of mktime() for the conversion.
as that is specific to the BSDs and glibc, provide a fallback.
amends 62a6099.
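a self-contained sketch of such a fallback, doing the UTC conversion
arithmetically (my_timegm is a made-up name; the real fallback may
differ, and no input normalization is done here):

    #include <time.h>

    static time_t my_timegm( const struct tm *tm )
    {
        /* cumulative days before each month in a non-leap year */
        static const int mdays[] =
            { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334 };
        long long year = tm->tm_year + 1900;
        long long days;

        /* days since 1970-01-01, counting leap days before this year */
        days = (year - 1970) * 365
             + (year - 1969) / 4
             - (year - 1901) / 100
             + (year - 1601) / 400
             + mdays[tm->tm_mon]
             + tm->tm_mday - 1;
        /* add this year's leap day once we are past february */
        if (tm->tm_mon > 1 && !(year % 4) && ((year % 100) || !(year % 400)))
            days++;

        return ((days * 24 + tm->tm_hour) * 60 + tm->tm_min) * 60 + tm->tm_sec;
    }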
by putting the message propagation last, d3f634702 uncovered a
long-standing problem: we might have closed the source store before all
messages were propagated from it.
msgs_copied() was not checked at all, and msgs_flags_set() was doing it
wrong (sync_close() was not checked).
instead of trying to fix/extend the msgs_flags_set() model (ref-counting
and cancelation checking in lower-level functions, and return values to
propagate the status), place the refs/derefs around higher-level scopes
and do the checking only there. this is effectively simpler, and does
away with some obscure macros.
as the named boxes are the same on both sides, they logically make
sense only when the channel is in that mode anyway, which is the case
when using patterns.
we would see the recent timestamp of the creation and conclude that
something is going on, so we'd wait. this is obviously nonsense.
as we know that a freshly created mailbox is empty, simply skip the
message scan altogether.
as we now don't actually start propagating new messages until all TUIDs
have been generated, it's sufficient to sync just once. this makes it
a cheap operation, so we can do it at SYNC_NORMAL level already.
sneaky change on the side: the man page's wording for global options is
changed from "outside any section" to "before any section".
this is not entirely true ... the previously existing options behave as
before, while the two newcomers actually affect subsequent channels.
i.e., move it back. whatever the original reason was, it's now gone.
this order is way more natural, which allows us to remove the osrecadd
and S_DONE hacks.
this helps enormously on the first sync of a 100k message box with a
limit of 1k messages. it also happens to make the syncing idempotent.
in a few conditionals we now explicitly test for max_messages being
enabled, not smaxxuid != 0, as after the initial fetch with no important
messages smaxxuid is zero, but we still have to skip over 99k messages
in the above case.
previous sequence:
examine & propagate new => examine old => propagate old
new sequence:
examine new => examine old => propagate new => propagate old
this alone does not buy us much ...
we can bump the internal variable wherever convenient, but we cannot
log it until we know that all messages were copied, as otherwise we
could miss some new messages after an interruption. with the new
approach, interruption would merely cause some additional traffic.
less code duplication, more logical order of issued driver commands
(especially after the next commit), and the "side effect" of letting the
message expiration code see those deletions if they are asynchronous.
the delay optimized the corner case of previously important but now
expired messages on the slave disappearing, either through an external
expunge or after a journal replay. no point in pessimizing the common
case.
the removed code would only ever trigger if a) we were after a journal
replay or b) something external expunged the expired messages - both are
corner cases not worth the extra code.
however, this means that the syncing code further down now needs to take
care of these zombies.
in the end, the normal cleanup will take care of all expired entries,
new and old.
that is, don't count them towards the total only below the cut-off
point. making them extend the working set even though they are inside it
is counterintuitive.
while maildir has a clearly defined meaning of "recent" and for example
mutt handles it gracefully, IMAP's definition is fubared to the point
that some servers (for example gmail) simply refuse to support it.
for symmetry reasons it is best to pretend that it doesn't exist at all.
it doesn't seem too useful anyway (the user can simply mark the messages
as read to allow pruning).
and last but not least, the man page of mbsync says nothing about
"recent", only "unread". unlike the isync man page, though.
even if we are not propagating new messages, the appearance of new
messages on the slave can lead to expiring older messages. for that, we
need to know their importance, and thus flags.
the alternative would be not doing an expiration run when not fetching
new messages, but that would mean more conditionals all over the place.
as the decision is somewhat arbitrary, just do the simpler thing.
the header is not space-critical, so use proper name-value pairs.
this has the additional advantage that subsequent format changes can be
done much more easily.
otherwise we would propagate phantom deletions.
this affected only sync runs after an interruption while storing
messages, so it went (mostly?) unnoticed.
the warning suppression pragma within function scope is apparently a new
thing.
as i don't want to disable the check for the entire function (even if
this currently would make no difference), just use a wrapper function
to suppress the format string check.
amends 9c86ec344.
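roughly what such a wrapper can look like (names are illustrative, gcc
assumed):

    #include <stdarg.h>
    #include <stdio.h>

    /* the pragmas sit at file scope around just this wrapper, so the
     * format string check stays enabled for every other call site. */
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wformat-nonliteral"
    static void print_fmt( const char *fmt, va_list va )
    {
        vprintf( fmt, va );
    }
    #pragma GCC diagnostic pop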
S_FIND was for the sync record status field. it has no business in the
sync vars status fields. its value coincided with ST_SELECTED, which
luckily only meant that we always tried to match up TUIDs even if there
was nothing to do.
the need for TUID matching arises in two mostly independent
circumstances, so add two separate flags ST_FIND_{OLD,NEW}.
this would happen if we were trying to find newly pushed messages, but
none actually arrived.
as imap's ranges are unordered (a range like 9:8 means the same as 8:9),
this would actually fetch one message.
this value is only ever used to find just pushed messages by TUID, so we
can simply use the UIDNEXT value from before we started pushing - and of
course, we need to record that in the journal. it makes no sense to log
the new value after completing a search, as there won't be a next search
before we push the next messages.
the purpose of this variable is to hold the UIDNEXT value from before
we started pushing new messages, i.e., the minimal uid we can expect
them to have.
the test suite actually relies on it. it would be possible to adjust it,
but there is not much reason to make paths relative to HOME (as we
support convenient tilde expansion). so use the least invasive approach,
which is simply the old behavior. adjust the documentation accordingly.
This reverts commit da5ce5d8f4.
always use getsockopt() to query the meaning of POLLERR, rather than
reporting "Unidentified socket error".
this is unlikely to have any effect when using select(), as that one
pretty much never signals exceptional conditions.
turns out that poll() may (and on linux does) signal POLLERR on
connection failure. this is unlike select(), which is specified to
signal write readiness in every case.
consequently, check whether we are connecting before checking for
POLLERR.
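a rough sketch of the getsockopt() side of this (not the actual socket.c
code; the caller is assumed to know from its own state whether a connect
is still pending):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    /* ask the kernel what actually went wrong instead of printing a
     * generic "unidentified socket error". during a non-blocking
     * connect() this is typically ECONNREFUSED, ETIMEDOUT, etc. */
    static void report_sock_error( int fd, const char *name )
    {
        int err = 0;
        socklen_t errlen = sizeof(err);

        if (getsockopt( fd, SOL_SOCKET, SO_ERROR, &err, &errlen ) < 0)
            err = errno;
        fprintf( stderr, "Socket error on %s: %s\n", name, strerror( err ) );
    }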
time_t may be long long. to keep the sprintf format strings simple, just
downcast - this is not going to be a problem for the next 30 years, and
until then long will be 64-bit everywhere anyway.
suggested 3.5 years ago by Antoine Reilles <tonio@NetBSD.org>.
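i.e., keep the "%ld" format and just cast the value, e.g. (illustrative
only):

    #include <stdio.h>
    #include <time.h>

    int main( void )
    {
        char buf[32];

        /* time_t may be long long; current timestamps still fit in a
         * long, and by the time they don't, long will be 64-bit anyway. */
        sprintf( buf, "%ld", (long)time( NULL ) );
        puts( buf );
        return 0;
    }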
leave all the hard work to OpenSSL. this has several consequences:
- certificate chain validation actually works instead of throwing
around error 20
- the interactive approval is gone. i don't expect it to be useful
anyway, as mbsync is mostly a batch tool
- the code is much shorter
we did not check a valid certificate's subject at all so far.
this is no problem if the certificate file contains only exactly the
wanted host's certificate - before revision 04fdf7d1 (dec 2000, < v0.4),
this was even enforced (more or less - if the peer cert had been
signed directly by a root cert, it would be accepted as well).
however, when the file contains root certificates (like the system-wide
certificate file typically does), any host with a valid certificate
could pretend to be the wanted host.
fdatasync() the journal after creating the pair record and recording
the TUID, but before the message propagation actually starts.
all other writes to the journal are not flushed, as they will at worst
cause some unnecessary network traffic without visible effect.
this fixes two possible failure scenarios:
- if the journal is committed but the mails are not, the missing files
would be erroneously interpreted as deletions which would be
propagated
- less seriously, if the mail files' meta data was committed but the
file contents were not, we would end up with empty files, which would
have to be re-fetched "behind mbsync's back" (just deleting the files
would not work - see above)
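a sketch of where the flush sits (function name invented; the journal
writing itself is elided):

    #include <stdio.h>
    #include <unistd.h>

    /* flush stdio and force the journal record to disk before the
     * message copy starts; later, less critical journal writes skip
     * the fdatasync(). */
    static int commit_journal( FILE *jfp )
    {
        if (fflush( jfp ) == EOF)
            return -1;
        return fdatasync( fileno( jfp ) );
    }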
make sure that the new state is committed to disk before overwriting the
old version - by default meta data is committed first, so we may end up
with no valid state at all otherwise.
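the usual safe-replace pattern, sketched with made-up names:

    #include <stdio.h>
    #include <unistd.h>

    /* write the new state to a temp file, fsync() its contents, and
     * only then rename() it over the old version. */
    static int save_state( const char *path, const char *tmp, const char *data )
    {
        FILE *fp;

        if (!(fp = fopen( tmp, "w" )))
            return -1;
        if (fputs( data, fp ) == EOF || fflush( fp ) == EOF ||
            fsync( fileno( fp ) ) < 0) {
            fclose( fp );
            return -1;
        }
        if (fclose( fp ) == EOF)
            return -1;
        return rename( tmp, path );  /* atomically replaces the old state */
    }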