cgpt: Enable images to be built with auto-detected block sizes.

Do not assume all images use a 512-byte block size; use the new
common blocksize function to auto-detect the block size. Remove the
'blocks' field from partition entries, letting those partitions
default to a minimum size, and remove the 'blocks' field from the
default JSON file.
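The common blocksize helper itself is not part of this diff; as a rough illustration, a block-size probe on Linux typically uses the BLKSSZGET ioctl for block devices and falls back to a default for regular files (the function name and the 512-byte fallback here are assumptions, not the actual helper):

```python
import fcntl
import os
import stat
import struct

BLKSSZGET = 0x1268  # Linux ioctl: query a block device's logical sector size


def blocksize(path, default=512):
    """Return the logical block size of a block device.

    For regular files (e.g. file-backed images) there is no device
    sector size, so fall back to `default`.
    """
    st = os.stat(path)
    if stat.S_ISBLK(st.st_mode):
        with open(path, 'rb') as dev:
            buf = fcntl.ioctl(dev.fileno(), BLKSSZGET, struct.pack('I', 0))
        return struct.unpack('I', buf)[0]
    return default
```
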
BUG=chromium:832160
BRANCH=none
CQ-DEPEND=CL:*657734
TEST=manual
./build_image --board=bob
installed on device and booted
Output of 'cgpt show /dev/mmcblk0' with my changes:
       start        size    part  contents
           0           1          PMBR (Boot GUID: F3E1F680-95B0-054E-A82B-BD3F017DE459)
           1           1          Pri GPT header
           2          32          Pri GPT table
     8704000    52367312       1  Label: "STATE"
                                  Type: Linux data
                                  UUID: 3DFED94B-D9AE-B744-BD9F-BF0834938B33
       20480       32768       2  Label: "KERN-A"
                                  Type: ChromeOS kernel
                                  UUID: 0165168D-FA3F-E54F-8707-0AAEEF4D41EF
                                  Attr: priority=1 tries=0 successful=1
     4509696     4194304       3  Label: "ROOT-A"
                                  Type: ChromeOS rootfs
                                  UUID: C74D9CF6-152E-6B4E-964B-D4531A1268D1
       53248       32768       4  Label: "KERN-B"
                                  Type: ChromeOS kernel
                                  UUID: A0448FB5-D631-6C41-A5DA-18D8242735BC
                                  Attr: priority=0 tries=15 successful=0
      315392     4194304       5  Label: "ROOT-B"
                                  Type: ChromeOS rootfs
                                  UUID: DA8A4C29-EBCC-994C-ACD3-93C4F58A77DF
       16448           1       6  Label: "KERN-C"
                                  Type: ChromeOS kernel
                                  UUID: 36BA8C58-B600-3A4B-8B55-C32EB0164BBC
                                  Attr: priority=0 tries=15 successful=0
       16449           1       7  Label: "ROOT-C"
                                  Type: ChromeOS rootfs
                                  UUID: 56CE49CE-1DBD-F146-89E4-25B06C8B62C4
       86016       32768       8  Label: "OEM"
                                  Type: Linux data
                                  UUID: FA935D09-E726-E042-9B10-AA4C03AFB2EE
       16450           1       9  Label: "reserved"
                                  Type: ChromeOS reserved
                                  UUID: 2B982238-8E22-FA49-86C7-38734C5B33DE
       16451           1      10  Label: "reserved"
                                  Type: ChromeOS reserved
                                  UUID: 3B81F9CA-1BBA-164D-BEE9-0BCB1FA26F19
          64       16384      11  Label: "RWFW"
                                  Type: ChromeOS firmware
                                  UUID: E6944883-005E-AE4A-B3A8-B55AFCE94C42
      249856       65536      12  Label: "EFI-SYSTEM"
                                  Type: EFI System Partition
                                  UUID: F3E1F680-95B0-054E-A82B-BD3F017DE459
                                  Attr: legacy_boot=1
    61071327          32          Sec GPT table
    61071359           1          Sec GPT header
Output without changes:
       start        size    part  contents
           0           1          PMBR (Boot GUID: 9DA82550-D83E-6A4C-BA78-054E98F12006)
           1           1          Pri GPT header
           2          32          Pri GPT table
     8704000    52363264       1  Label: "STATE"
                                  Type: Linux data
                                  UUID: 3F35D00C-8097-D54C-B480-CF48872FA474
       20480       32768       2  Label: "KERN-A"
                                  Type: ChromeOS kernel
                                  UUID: 920D89C0-F1A6-924D-9EE9-A30FD8FEC247
                                  Attr: priority=1 tries=6 successful=1
     4509696     4194304       3  Label: "ROOT-A"
                                  Type: ChromeOS rootfs
                                  UUID: BA717C2B-83EC-0944-B24D-CD3ED88BC9C4
       53248       32768       4  Label: "KERN-B"
                                  Type: ChromeOS kernel
                                  UUID: 3DD0DBA3-0525-0B4E-9327-9FCDBAAB6580
                                  Attr: priority=0 tries=15 successful=0
      315392     4194304       5  Label: "ROOT-B"
                                  Type: ChromeOS rootfs
                                  UUID: F88EBFC4-A004-B348-ACFB-539E3AA93575
       16448           1       6  Label: "KERN-C"
                                  Type: ChromeOS kernel
                                  UUID: 82A9EF93-5DAA-5F45-9FFE-8A3A5C8DAE7F
                                  Attr: priority=0 tries=15 successful=0
       16449           1       7  Label: "ROOT-C"
                                  Type: ChromeOS rootfs
                                  UUID: D92CE030-3113-AF4E-AFD8-5E7AA302B35A
       86016       32768       8  Label: "OEM"
                                  Type: Linux data
                                  UUID: 27D59369-75CA-3946-B9A5-B6F091E4908A
       16450           1       9  Label: "reserved"
                                  Type: ChromeOS reserved
                                  UUID: E234C4E2-3F61-7147-B281-06B90047CE75
       16451           1      10  Label: "reserved"
                                  Type: ChromeOS reserved
                                  UUID: F3961050-D6A0-3A43-BC2D-0E1DB18EBF3D
          64       16384      11  Label: "RWFW"
                                  Type: ChromeOS firmware
                                  UUID: BA21F542-9AD9-FD49-9CA9-7E7C472EDA78
      249856       65536      12  Label: "EFI-SYSTEM"
                                  Type: EFI System Partition
                                  UUID: 9DA82550-D83E-6A4C-BA78-054E98F12006
                                  Attr: legacy_boot=1
    61071327          32          Sec GPT table
    61071359           1          Sec GPT header
CQ-DEPEND=CL:1091293, CL:1091299, CL:1121781
Change-Id: I9494a3a5f6d277c61a369e32e47ef1a17f95e8ad
Reviewed-on: https://chromium-review.googlesource.com/1091308
Commit-Ready: Sam Hurst <shurst@google.com>
Tested-by: Sam Hurst <shurst@google.com>
Reviewed-by: Julius Werner <jwerner@chromium.org>
diff --git a/build_library/cgpt.py b/build_library/cgpt.py
index 821a991..39a17d0 100755
--- a/build_library/cgpt.py
+++ b/build_library/cgpt.py
@@ -1,4 +1,5 @@
#!/usr/bin/env python2
+# -*- coding: utf-8 -*-
# Copyright (c) 2012 The Chromium OS Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
@@ -83,10 +84,14 @@
COMMON_LAYOUT = 'common'
BASE_LAYOUT = 'base'
# Blocks of the partition entry array.
-SIZE_OF_PARTITION_ENTRY_ARRAY = 32
+SIZE_OF_PARTITION_ENTRY_ARRAY_BYTES = 16 * 1024
SIZE_OF_PMBR = 1
SIZE_OF_GPT_HEADER = 1
-
+DEFAULT_SECTOR_SIZE = 512
+MAX_SECTOR_SIZE = 8 * 1024
+START_SECTOR = 4 * MAX_SECTOR_SIZE
+SECONDARY_GPT_BYTES = SIZE_OF_PARTITION_ENTRY_ARRAY_BYTES + \
+ SIZE_OF_GPT_HEADER * MAX_SECTOR_SIZE
def ParseHumanNumber(operand):
"""Parse a human friendly number
@@ -356,8 +361,7 @@
config = _LoadStackedPartitionConfig(filename)
try:
metadata = config['metadata']
- for key in ('block_size', 'fs_block_size'):
- metadata[key] = ParseHumanNumber(metadata[key])
+ metadata['fs_block_size'] = ParseHumanNumber(metadata['fs_block_size'])
unknown_keys = set(config.keys()) - valid_keys
if unknown_keys:
@@ -388,22 +392,13 @@
raise InvalidLayout('Layout "%s" missing "%s"' % (layout_name, s))
if 'size' in part:
- if 'blocks' in part:
- raise ConflictingOptions(
- '%s: Conflicting settings are used. '
- 'Found section sets both \'blocks\' and \'size\'.' %
- part['label'])
part['bytes'] = ParseHumanNumber(part['size'])
if 'size_min' in part:
size_min = ParseHumanNumber(part['size_min'])
if part['bytes'] < size_min:
part['bytes'] = size_min
- part['blocks'] = part['bytes'] / metadata['block_size']
-
- if part['bytes'] % metadata['block_size'] != 0:
- raise InvalidSize(
- 'Size: "%s" (%s bytes) is not an even number of block_size: %s'
- % (part['size'], part['bytes'], metadata['block_size']))
+ elif part.get('num') != 'metadata':
+ part['bytes'] = 1
if 'fs_size' in part:
part['fs_bytes'] = ParseHumanNumber(part['fs_size'])
@@ -439,10 +434,6 @@
(part['fs_size'], part['fs_bytes'], ubi_eb_size,
ProduceHumanNumber(fs_bytes)))
- if 'blocks' in part:
- part['blocks'] = ParseHumanNumber(part['blocks'])
- part['bytes'] = part['blocks'] * metadata['block_size']
-
if 'fs_blocks' in part:
max_fs_blocks = part['bytes'] / metadata['fs_block_size']
part['fs_blocks'] = ParseRelativeNumber(max_fs_blocks,
@@ -469,11 +460,12 @@
return config
-def _GetPrimaryEntryArrayLBA(config):
+def _GetPrimaryEntryArrayPaddingBytes(config):
"""Return the start LBA of the primary partition entry array.
Normally this comes after the primary GPT header but can be adjusted by
- setting the "primary_entry_array_lba" key under "metadata" in the config.
+ setting the "primary_entry_array_padding_bytes" key under "metadata" in
+ the config.
Args:
config: The config dictionary.
@@ -482,13 +474,7 @@
The number of padding bytes before the primary partition entry array.
"""
- pmbr_and_header_size = SIZE_OF_PMBR + SIZE_OF_GPT_HEADER
- entry_array = config['metadata'].get('primary_entry_array_lba',
- pmbr_and_header_size)
- if entry_array < pmbr_and_header_size:
- raise InvalidLayout('Primary entry array (%d) must be at least %d.' %
- entry_array, pmbr_and_header_size)
- return entry_array
+ return config['metadata'].get('primary_entry_array_padding_bytes', 0)
def _HasBadEraseBlocks(partitions):
@@ -499,21 +485,21 @@
return GetMetadataPartition(partitions).get('external_gpt', False)
-def _GetStartSector(config, partitions):
+def _GetPartitionStartByteOffset(config, partitions):
"""Return the first usable location (LBA) for partitions.
- This value is the first LBA after the PMBR, the primary GPT header, and
+ This value is the byte offset after the PMBR, the primary GPT header, and
partition entry array.
- We round it up to 64 to maintain the same layout as before in the normal (no
- padding between the primary GPT header and its partition entry array) case.
+ We round it up to 32K bytes to maintain the same layout as before in the
+ normal (no padding between the primary GPT header and its partition entry
+ array) case.
Args:
- config: The config dictionary.
partitions: List of partitions to process
Returns:
- A suitable LBA for partitions, at least 64.
+ A suitable byte offset for partitions.
"""
if _HasExternalGpt(partitions):
@@ -521,27 +507,25 @@
# will be 0, and we don't need to make space at the beginning for the GPT.
return 0
else:
- entry_array = _GetPrimaryEntryArrayLBA(config)
- start_sector = max(entry_array + SIZE_OF_PARTITION_ENTRY_ARRAY, 64)
- return start_sector
+ return START_SECTOR + _GetPrimaryEntryArrayPaddingBytes(config)
def GetTableTotals(config, partitions):
"""Calculates total sizes/counts for a partition table.
Args:
- config: Partition configuration file object
partitions: List of partitions to process
Returns:
Dict containing totals data
"""
- start_sector = _GetStartSector(config, partitions)
+ fs_block_align_losses = 0
+ start_sector = _GetPartitionStartByteOffset(config, partitions)
ret = {
'expand_count': 0,
'expand_min': 0,
- 'block_count': start_sector,
+ 'byte_count': start_sector,
}
# Total up the size of all non-expanding partitions to get the minimum
@@ -549,14 +533,16 @@
for partition in partitions:
if partition.get('num') == 'metadata':
continue
+
+ fs_block_align_losses += 4096
if 'expand' in partition['features']:
ret['expand_count'] += 1
- ret['expand_min'] += partition['blocks']
+ ret['expand_min'] += partition['bytes']
else:
- ret['block_count'] += partition['blocks']
+ ret['byte_count'] += partition['bytes']
# Account for the secondary GPT header and table.
- ret['block_count'] += SIZE_OF_GPT_HEADER + SIZE_OF_PARTITION_ENTRY_ARRAY
+ ret['byte_count'] += SECONDARY_GPT_BYTES
# At present, only one expanding partition is permitted.
# Whilst it'd be possible to have two, we don't need this yet
@@ -565,7 +551,11 @@
raise InvalidLayout('1 expand partition allowed, %d requested'
% ret['expand_count'])
- ret['min_disk_size'] = ret['block_count'] + ret['expand_min']
+ # We lose some extra bytes from the alignment which are now not considered in
+ # min_disk_size because partitions are aligned on the fly. Adding
+ # fs_block_align_losses corrects for the loss.
+ ret['min_disk_size'] = ret['byte_count'] + ret['expand_min'] + \
+ fs_block_align_losses
return ret
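GetTableTotals now accounts in bytes: each non-expanding partition contributes its byte size, every partition reserves a worst-case 4096 bytes for on-the-fly fs-block alignment, and the secondary GPT is added at the end. A simplified sketch of the same accounting (partition dicts reduced to just 'bytes' and 'features'; constants mirror the ones defined in cgpt.py):

```python
SECONDARY_GPT_BYTES = 16 * 1024 + 8 * 1024  # entry array + worst-case header
START_OFFSET = 32 * 1024                    # first usable byte, no padding


def min_disk_size(partitions):
    """Minimum image size in bytes, mirroring GetTableTotals()."""
    byte_count = START_OFFSET
    expand_min = 0
    align_losses = 0
    for part in partitions:
        # Worst case lost when this partition is aligned to a 4 KiB
        # fs block at write time.
        align_losses += 4096
        if 'expand' in part.get('features', []):
            expand_min += part['bytes']
        else:
            byte_count += part['bytes']
    byte_count += SECONDARY_GPT_BYTES
    # Alignment losses are added back so the expanding partition still
    # gets at least its minimum size after rounding.
    return byte_count + expand_min + align_losses
```
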
@@ -744,9 +734,18 @@
config: Partition configuration file object
"""
+ gpt_add = '${GPT} add -i %d -b $(( curr / block_size )) -s ${blocks} -t %s \
+ -l "%s" ${target}'
partitions = GetPartitionTable(options, config, image_type)
metadata = GetMetadataPartition(partitions)
partition_totals = GetTableTotals(config, partitions)
+ align_to_fs_block = [
+ 'if [ $(( curr %% %d )) -gt 0 ]; then' %
+ config['metadata']['fs_block_size'],
+ ' : $(( curr += %d - curr %% %d ))' %
+ ((config['metadata']['fs_block_size'],) * 2),
+ 'fi',
+ ]
lines = [
'write_%s_table() {' % func,
@@ -769,58 +768,72 @@
else:
lines += [
'local target="$1"',
- 'create_image "${target}" %d %s' % (
- partition_totals['min_disk_size'],
- config['metadata']['block_size']),
+ 'create_image "${target}" %d' % partition_totals['min_disk_size'],
]
+ lines += [
+ 'local blocks',
+ 'block_size=$(blocksize "${target}")',
+ 'numsecs=$(numsectors "${target}")',
+ ]
+
# ${target} is referenced unquoted because it may expand into multiple
# arguments in the case of NAND
lines += [
- 'local curr=%d' % _GetStartSector(config, partitions),
+ 'local curr=%d' % _GetPartitionStartByteOffset(config, partitions),
+ '# Make sure Padding is block_size aligned.',
+ 'if [ $(( %d & (block_size - 1) )) -gt 0 ]; then' %
+ _GetPrimaryEntryArrayPaddingBytes(config),
+ ' echo "Primary Entry Array padding is not block aligned." >&2',
+ ' exit 1',
+ 'fi',
'# Create the GPT headers and tables. Pad the primary ones.',
- '${GPT} create -p %d ${target}' % (_GetPrimaryEntryArrayLBA(config) -
- (SIZE_OF_PMBR + SIZE_OF_GPT_HEADER)),
+ '${GPT} create -p $(( %d / block_size )) ${target}' %
+ _GetPrimaryEntryArrayPaddingBytes(config),
]
metadata = GetMetadataPartition(partitions)
- # Pass 1: Set up the expanding partition size.
+ stateful = None
+ # Set up the expanding partition size and write out all the cgpt add
+ # commands.
for partition in partitions:
if partition.get('num') == 'metadata':
continue
- partition['var'] = (GetFullPartitionSize(partition, metadata) /
- config['metadata']['block_size'])
- if (partition.get('type') != 'blank' and partition['num'] == 1 and
- 'expand' in partition['features']):
+ partition['var'] = GetFullPartitionSize(partition, metadata)
+ if 'expand' in partition['features']:
+ stateful = partition
+ continue
+
+ if (partition.get('type') in ['data', 'rootfs'] and partition['bytes'] > 1):
+ lines += align_to_fs_block
+
+ if partition['var'] != 0 and partition.get('num') != 'metadata':
lines += [
- 'local stateful_size=%s' % partition['blocks'],
- 'if [ -b "${target}" ]; then',
- ' stateful_size=$(( $(numsectors "${target}") - %d))' % (
- partition_totals['block_count']),
+ 'blocks=$(( %s / block_size ))' % partition['var'],
+ 'if [ $(( %s %% block_size )) -gt 0 ]; then' % partition['var'],
+ ' : $(( blocks += 1 ))',
'fi',
- ': $(( stateful_size -= (stateful_size %% %d) ))' % (
- config['metadata']['fs_block_size']),
]
- partition['var'] = '${stateful_size}'
- # Pass 2: Write out all the cgpt add commands.
- for partition in partitions:
- if partition.get('num') == 'metadata':
- continue
if partition['type'] != 'blank':
lines += [
- '${GPT} add -i %d -b ${curr} -s %s -t %s -l "%s" ${target}' % (
- partition['num'], str(partition['var']), partition['type'],
- partition['label']),
+ gpt_add % (partition['num'], partition['type'], partition['label']),
]
# Increment the curr counter ready for the next partition.
if partition['var'] != 0 and partition.get('num') != 'metadata':
lines += [
- ': $(( curr += %s ))' % partition['var'],
+ ': $(( curr += blocks * block_size ))',
]
+ if stateful != None:
+ lines += align_to_fs_block + [
+ 'blocks=$(( numsecs - (curr + %d) / block_size ))' %
+ SECONDARY_GPT_BYTES,
+ gpt_add % (stateful['num'], stateful['type'], stateful['label']),
+ ]
+
# Set default priorities and retry counter on kernel partitions.
tries = 15
prio = 15
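The emitted shell keeps `curr` as a byte cursor, converts each partition's byte size to device blocks by rounding up, and snaps data/rootfs starts to the fs block size. The two rounding steps, restated in Python (names are illustrative, not from the source):

```python
def bytes_to_blocks(nbytes, block_size):
    """Round a byte count up to whole device blocks, as the generated
    'blocks=$(( ... ))' shell fragment does."""
    blocks = nbytes // block_size
    if nbytes % block_size > 0:
        blocks += 1
    return blocks


def align_to_fs_block(curr, fs_block_size):
    """Round the byte cursor up to the next fs-block boundary, as the
    generated align_to_fs_block shell fragment does."""
    if curr % fs_block_size > 0:
        curr += fs_block_size - curr % fs_block_size
    return curr
```

This is why 'blocks' no longer needs to appear in the layout file: a 1-byte placeholder partition still rounds up to exactly one device block, whatever the detected block size is.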
diff --git a/build_library/cgpt_shell.sh b/build_library/cgpt_shell.sh
index 6c72c87..3d804e7 100644
--- a/build_library/cgpt_shell.sh
+++ b/build_library/cgpt_shell.sh
@@ -11,21 +11,32 @@
fi
locate_gpt
-# Usage: create_image <device> <min_disk_size> <block_size>
+# Usage: create_image <device> <min_disk_size>
# If <device> is a block device, wipes out the GPT
# If it's not, it creates a new file of the requested size
create_image() {
local dev="$1"
local min_disk_size="$2"
- local block_size="$3"
+
if [ -b "${dev}" ]; then
+ # Make sure block size is not greater than 8K. Otherwise the partition
+ # start calculation won't fit.
+ block_size=$(blocksize "${dev}")
+ if [ "${block_size}" -gt 8192 ]; then
+ echo "Destination blocksize too large. Only blocksizes of 8192 bytes and \
+ smaller are supported." >&2
+ exit 1
+ fi
+
# Zap any old partitions (otherwise gpt complains).
- dd if=/dev/zero of="${dev}" conv=notrunc bs=512 count=32
- dd if=/dev/zero of="${dev}" conv=notrunc bs=512 count=33 \
- seek=$(( min_disk_size * block_size / 512 - 1 - 33 ))
+ dd if=/dev/zero of="${dev}" conv=notrunc bs=512 count=64
+ dd if=/dev/zero of="${dev}" conv=notrunc bs=512 count=64 \
+ seek=$(( min_disk_size / 512 - 64 ))
else
if [ ! -e "${dev}" ]; then
- truncate -s "$(( min_disk_size * block_size ))" "${dev}"
+ # Align to 512 bytes
+ min_disk_size=$(( (min_disk_size + 511) & ~511 ))
+ truncate -s "${min_disk_size}" "${dev}"
fi
fi
}
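For file-backed images, create_image now rounds the requested size up to a 512-byte boundary before truncating. The shell expression `(min_disk_size + 511) & ~511` is the standard power-of-two round-up; in Python:

```python
def round_up_512(n):
    """Round n up to the next multiple of 512.

    Adding 511 then clearing the low 9 bits works because 512 is a
    power of two: & ~511 truncates down, and the +511 guarantees any
    non-multiple crosses the next boundary first.
    """
    return (n + 511) & ~511
```
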
diff --git a/build_library/legacy_disk_layout.json b/build_library/legacy_disk_layout.json
index 823de31..2f80c60 100644
--- a/build_library/legacy_disk_layout.json
+++ b/build_library/legacy_disk_layout.json
@@ -20,33 +20,29 @@
# Unused partition, reserved for software slot C.
"num": 6,
"label": "KERN-C",
- "type": "kernel",
- "blocks": "1"
+ "type": "kernel"
},
{
# Unused partition, reserved for software slot C.
"num": 7,
"label": "ROOT-C",
- "type": "rootfs",
- "blocks": "1"
+ "type": "rootfs"
},
{
# Unused partition, reserved for future changes.
"num": 9,
"type": "reserved",
- "label": "reserved",
- "blocks": "1"
+ "label": "reserved"
},
{
# Unused partition, reserved for future changes.
"num": 10,
"type": "reserved",
- "label": "reserved",
- "blocks": "1"
+ "label": "reserved"
},
{
# Pad out so Kernel A starts on a 4096 block boundary for
- # performance. This is especially important on Daisy.
+ # performance. This is especially important on Daisy.
"type": "blank",
"size": "2014 KiB"
},
diff --git a/image_to_vm.sh b/image_to_vm.sh
index e053c52..be31cd9 100755
--- a/image_to_vm.sh
+++ b/image_to_vm.sh
@@ -152,16 +152,24 @@
STATEFUL_SIZE_MEGABYTES=$(( STATEFUL_SIZE_BYTES / 1024 / 1024 ))
original_image_size=$(bd_safe_size "${SRC_STATE}")
if [ "${original_image_size}" -gt "${STATEFUL_SIZE_BYTES}" ]; then
- die "Cannot resize stateful image to smaller than original. Exiting."
+ if [ $(( original_image_size - STATEFUL_SIZE_BYTES )) -lt \
+ $(( 1024 * 1024 )) ]; then
+ # cgpt.py sometimes makes the stateful a tiny bit larger to
+ # counteract alignment losses.
+ # This is fine -- just keep using the slightly larger partition as it is.
+ TEMP_STATE="${SRC_STATE}"
+ else
+ die "Cannot resize stateful image to smaller than original. Exiting."
+ fi
+ else
+ echo "Resizing stateful partition to ${STATEFUL_SIZE_MEGABYTES}MB"
+ # Extend the original file size to the new size.
+ TEMP_STATE="${TEMP_DIR}"/stateful
+ # Create TEMP_STATE as a regular user so a regular user can delete it.
+ sudo dd if="${SRC_STATE}" bs=16M status=none > "${TEMP_STATE}"
+ sudo e2fsck -pf "${TEMP_STATE}"
+ sudo resize2fs "${TEMP_STATE}" ${STATEFUL_SIZE_MEGABYTES}M
fi
-
- echo "Resizing stateful partition to ${STATEFUL_SIZE_MEGABYTES}MB"
- # Extend the original file size to the new size.
- TEMP_STATE="${TEMP_DIR}"/stateful
- # Create TEMP_STATE as a regular user so a regular user can delete it.
- sudo dd if="${SRC_STATE}" bs=16M status=none > "${TEMP_STATE}"
- sudo e2fsck -pf "${TEMP_STATE}"
- sudo resize2fs "${TEMP_STATE}" ${STATEFUL_SIZE_MEGABYTES}M
fi
TEMP_PMBR="${TEMP_DIR}"/pmbr
dd if="${SRC_IMAGE}" of="${TEMP_PMBR}" bs=512 count=1
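The image_to_vm.sh change stops dying when the source stateful partition is only slightly larger than the target: cgpt.py may have grown it a little to absorb alignment losses, and that oversize is acceptable within 1 MiB. The decision logic, as a sketch (function name is illustrative):

```python
def stateful_action(original_bytes, target_bytes, slack=1024 * 1024):
    """Mirror the new image_to_vm.sh branch structure.

    Returns 'keep' to reuse a slightly oversized stateful as-is,
    'die' when it cannot be shrunk to the target, and 'resize'
    when the target is at least as large as the original.
    """
    if original_bytes > target_bytes:
        if original_bytes - target_bytes < slack:
            return 'keep'    # alignment compensation; use it unchanged
        return 'die'         # genuinely cannot shrink below original
    return 'resize'          # copy then resize2fs to the target size
```
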