Check the completeness of original blocks during target files validation

validate_target_files.py checks the 'incomplete' field on the range it
reads from file_map. But that range has already had the shared blocks
subtracted, so it can be smaller than the file's original range. Since
the 'incomplete' flag is set on the original range in common.py,
validation should also switch to the original range before checking
the flag.
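
To illustrate the ordering issue, here is a minimal, self-contained
sketch; FakeRangeSet is a stand-in for rangelib.RangeSet, and only the
'extra' dict layout described above is assumed:

# Minimal stand-in for rangelib.RangeSet; only the 'extra' dict matters here.
class FakeRangeSet(object):
  def __init__(self, extra=None):
    self.extra = extra or {}

# A file whose blocks overlap another file's: the trimmed range stores the
# full original RangeSet under 'uses_shared_blocks', and (per the message
# above) common.py marks 'incomplete' on that original range.
original = FakeRangeSet(extra={'incomplete': True})
trimmed = FakeRangeSet(extra={'uses_shared_blocks': original})

# Old validation: reads the flag off the trimmed range and never sees it.
assert not trimmed.extra.get('incomplete', False)

# Fixed validation: switch to the original range first, then read the flag.
file_ranges = trimmed.extra.get('uses_shared_blocks') or trimmed
assert file_ranges.extra.get('incomplete', False)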

I also checked the other usage of this flag, in CanUseImgdiff(); that
function already rejects files with shared blocks explicitly.
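
For comparison, that rejection can be pictured as a guard of the
following shape. This is an illustrative sketch only; the real
CanUseImgdiff() lives in blockimgdiff.py and its exact signature and
set of checks may differ:

import logging

def can_use_imgdiff_sketch(name, tgt_ranges, src_ranges):
  """Hypothetical guard mirroring the behavior described above."""
  if (tgt_ranges.extra.get('uses_shared_blocks') or
      src_ranges.extra.get('uses_shared_blocks')):
    logging.info('Not using imgdiff for %s: it uses shared blocks', name)
    return False
  # ... remaining eligibility checks (file type, block list completeness).
  return True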

Bug: 124868891
Test: unit tests pass
Change-Id: I03959625d7b81fd83420db98f01d23f54064bcd2
xunchang
2019-02-20 15:03:43 -08:00
parent 5fa7e2fa2c
commit c0f77ee489
2 changed files with 68 additions and 9 deletions

validate_target_files.py

@@ -84,11 +84,6 @@ def ValidateFileConsistency(input_zip, input_tmp, info_dict):
       # bytes past the file length, which is expected to be padded with '\0's.
       ranges = image.file_map[entry]
-      incomplete = ranges.extra.get('incomplete', False)
-      if incomplete:
-        logging.warning('Skipping %s that has incomplete block list', entry)
-        continue
       # Use the original RangeSet if applicable, which includes the shared
       # blocks. And this needs to happen before checking the monotonicity flag.
       if ranges.extra.get('uses_shared_blocks'):
@@ -96,6 +91,11 @@ def ValidateFileConsistency(input_zip, input_tmp, info_dict):
       else:
         file_ranges = ranges
+      incomplete = file_ranges.extra.get('incomplete', False)
+      if incomplete:
+        logging.warning('Skipping %s that has incomplete block list', entry)
+        continue
       # TODO(b/79951650): Handle files with non-monotonic ranges.
       if not file_ranges.monotonic:
         logging.warning(
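
For reference, the common.py side that attaches the flag can be
summarized as follows. This is a sketch based only on the description
in the message above; mark_incomplete is a hypothetical helper, and
the size comparison is an assumption about how incompleteness is
detected:

def mark_incomplete(ranges, file_size, block_size=4096):
  """Hypothetical helper: record 'incomplete' on the original RangeSet.

  If the entry was trimmed because of shared blocks, switch to the original
  RangeSet before comparing the file size against the blocks it covers.
  """
  if ranges.extra.get('uses_shared_blocks'):
    ranges = ranges.extra['uses_shared_blocks']
  # Round the file size up to a whole number of blocks (assumed heuristic).
  rounded = (file_size + block_size - 1) // block_size * block_size
  if rounded > ranges.size() * block_size:
    ranges.extra['incomplete'] = True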