author    Namjae Jeon <namjae.jeon@samsung.com>    Tue, 16 Apr 2013 11:12:29 +0000 (16:42 +0530)
committer Balaji T K <balajitk@ti.com>             Tue, 16 Apr 2013 11:12:29 +0000 (16:42 +0530)
If multiple discard requests get merged and the merged discard request's
size exceeds 4GB, there is a possibility that the merged request's
__data_len field may overflow.

Add a BLK_DEF_MAX_DISCARD_SECTORS macro to limit max discard sectors
to under 4GB / 512, i.e. UINT_MAX >> 9 sectors.

This fixes mkfs.ext4 failure on mmc/sd partitions larger than 4GB.
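
For illustration, a minimal userspace sketch (not part of the patch) of the
overflow described above, assuming the request byte count is kept in a
32-bit unsigned field such as struct request's __data_len:

    /*
     * Illustration only: a discard request larger than 4GB wraps around
     * when its byte length is stored in a 32-bit unsigned field.
     */
    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        unsigned long long sectors = (unsigned long long)UINT_MAX >> 9; /* largest safe count */
        unsigned int data_len;

        /* At the limit, sectors * 512 still fits in 32 bits. */
        data_len = (unsigned int)(sectors << 9);
        printf("%llu sectors -> %u bytes (fits)\n", sectors, data_len);

        /* One more sector and the 32-bit byte count wraps around. */
        sectors += 1;
        data_len = (unsigned int)(sectors << 9);
        printf("%llu sectors -> %u bytes (overflowed)\n", sectors, data_len);

        return 0;
    }
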
Reported-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com>
Signed-off-by: Vivek Trivedi <t.vivek@samsung.com>
Tested-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Balaji T K <balajitk@ti.com>
block/blk-settings.c
drivers/mmc/card/queue.c
include/linux/blkdev.h
diff --git a/block/blk-settings.c b/block/blk-settings.c
index c50ecf0ea3b17c652db8c134905de38e56713851..34e6b618fe07d5b7407f75a1c33bec26ffbe3bd1 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
void blk_queue_max_discard_sectors(struct request_queue *q,
unsigned int max_discard_sectors)
{
- q->limits.max_discard_sectors = max_discard_sectors;
+ q->limits.max_discard_sectors = min_t(unsigned int, max_discard_sectors,
+ BLK_DEF_MAX_DISCARD_SECTORS);
}
EXPORT_SYMBOL(blk_queue_max_discard_sectors);
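
A hedged userspace model of the clamping behaviour added above; the
function and constant names mirror the kernel but the snippet itself is
only an illustration, not kernel code:

    #include <stdio.h>
    #include <limits.h>

    /* mirrors the new enum value in include/linux/blkdev.h */
    #define BLK_DEF_MAX_DISCARD_SECTORS (UINT_MAX >> 9)

    /* models min_t(unsigned int, max_discard_sectors, BLK_DEF_MAX_DISCARD_SECTORS) */
    static unsigned int clamp_discard_sectors(unsigned int requested)
    {
        return requested < BLK_DEF_MAX_DISCARD_SECTORS ?
               requested : BLK_DEF_MAX_DISCARD_SECTORS;
    }

    int main(void)
    {
        /* a small request passes through unchanged */
        printf("request %u -> limit %u\n", 1024u, clamp_discard_sectors(1024u));
        /* an oversized request is capped at the new default */
        printf("request %u -> limit %u\n", UINT_MAX, clamp_discard_sectors(UINT_MAX));
        return 0;
    }
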
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index fadf52eb5d70d410bc0055d379b248413e680d6b..71b8d7bcdf799d0588c8665d770044d354ee9d8d 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
return;
queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
- q->limits.max_discard_sectors = max_discard;
+ blk_queue_max_discard_sectors(q, max_discard);
if (card->erased_byte == 0 && !mmc_can_discard(card))
q->limits.discard_zeroes_data = 1;
q->limits.discard_granularity = card->pref_erase << 9;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f94bc83011ed5c7a3c96f0adabefa1246f86cb47..96c4d118062f5128dfbb2dfb4c30277faac400b0 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
BLK_DEF_MAX_SECTORS = 1024,
BLK_MAX_SEGMENT_SIZE = 65536,
BLK_SEG_BOUNDARY_MASK = 0xFFFFFFFFUL,
+ BLK_DEF_MAX_DISCARD_SECTORS = UINT_MAX >> 9,
};
#define blkdev_entry_to_request(entry) list_entry((entry), struct request, queuelist)
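
As a sanity check (a sketch assuming the request byte count is a 32-bit
unsigned int; not part of the patch), the new constant caps a discard at
UINT_MAX >> 9 sectors, i.e. 4294966784 bytes, just under 4GB, which still
fits in 32 bits:

    #include <limits.h>

    #define BLK_DEF_MAX_DISCARD_SECTORS (UINT_MAX >> 9)

    /* (UINT_MAX >> 9) << 9 == 4294966784 bytes, just under 4GB */
    _Static_assert((unsigned long long)BLK_DEF_MAX_DISCARD_SECTORS << 9 <= UINT_MAX,
                   "maximum discard byte count must fit in a 32-bit field");

    int main(void) { return 0; }
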