Upstream-Status: Backport
From 454a0538680bc17656cefadef1f167917ea0b856 Mon Sep 17 00:00:00 2001
From: Chris Mason <chris.mason@oracle.com>
Date: Wed, 15 Dec 2010 16:02:45 -0500
Subject: [PATCH 2/5] Check for RAID10 in set_avail_alloc_bits

When RAID is set up with mkfs, it is supposed to cow the initial filesystem
it creates up to the desired RAID level.  RAID10 was not in the list
of RAID levels it checked for, so the initial FS created for RAID10
actually only lived on the first disk.

This works well enough because all the roots get quickly cowed during the
first mount.  The exception is the data relocation tree, which only gets
cowed when we do a balance.
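
To make the one-line change below concrete, here is a small, self-contained
sketch (not part of the patch) that reproduces the flag masking done by
set_avail_alloc_bits with and without the RAID10 bit.  The
BTRFS_BLOCK_GROUP_* values match the btrfs on-disk format headers; the rest
is illustrative only.

    #include <stdint.h>
    #include <stdio.h>

    /* Block group flag values from the btrfs on-disk format headers. */
    #define BTRFS_BLOCK_GROUP_DATA     (1ULL << 0)
    #define BTRFS_BLOCK_GROUP_SYSTEM   (1ULL << 1)
    #define BTRFS_BLOCK_GROUP_METADATA (1ULL << 2)
    #define BTRFS_BLOCK_GROUP_RAID0    (1ULL << 3)
    #define BTRFS_BLOCK_GROUP_RAID1    (1ULL << 4)
    #define BTRFS_BLOCK_GROUP_DUP      (1ULL << 5)
    #define BTRFS_BLOCK_GROUP_RAID10   (1ULL << 6)

    int main(void)
    {
        /* Flags of a data block group created by mkfs for a RAID10 array. */
        uint64_t flags = BTRFS_BLOCK_GROUP_DATA | BTRFS_BLOCK_GROUP_RAID10;

        /* Pre-patch mask: RAID10 is missing, so its profile bit is dropped. */
        uint64_t old_mask = BTRFS_BLOCK_GROUP_RAID0 |
                            BTRFS_BLOCK_GROUP_RAID1 |
                            BTRFS_BLOCK_GROUP_DUP;
        /* Post-patch mask: RAID10 is included, so the profile survives. */
        uint64_t new_mask = old_mask | BTRFS_BLOCK_GROUP_RAID10;

        printf("pre-patch  extra_flags = 0x%llx\n",
               (unsigned long long)(flags & old_mask));
        printf("post-patch extra_flags = 0x%llx\n",
               (unsigned long long)(flags & new_mask));
        return 0;
    }

With the pre-patch mask, extra_flags comes out as 0 for a RAID10 block
group, so set_avail_alloc_bits never records RAID10 as an available
profile; with the RAID10 bit added, extra_flags is non-zero and the
profile is carried through as intended.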

Signed-off-by: Chris Mason <chris.mason@oracle.com>
---
 extent-tree.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/extent-tree.c b/extent-tree.c
index b2f9bb2..108933f 100644
--- a/extent-tree.c
+++ b/extent-tree.c
@@ -1775,6 +1775,7 @@ static void set_avail_alloc_bits(struct btrfs_fs_info *fs_info, u64 flags)
 {
 	u64 extra_flags = flags & (BTRFS_BLOCK_GROUP_RAID0 |
 				   BTRFS_BLOCK_GROUP_RAID1 |
+				   BTRFS_BLOCK_GROUP_RAID10 |
 				   BTRFS_BLOCK_GROUP_DUP);
 	if (extra_flags) {
 		if (flags & BTRFS_BLOCK_GROUP_DATA)
-- 
1.7.2.3