Fix typos #739

Open · wants to merge 1 commit into base: master
2 changes: 1 addition & 1 deletion fs/bcachefs/Kconfig
@@ -38,7 +38,7 @@ config BCACHEFS_ERASURE_CODING
depends on BCACHEFS_FS
select QUOTACTL
help
-	  This enables the "erasure_code" filesysystem and inode option, which
+	  This enables the "erasure_code" filesystem and inode option, which
organizes data into reed-solomon stripes instead of ordinary
replication.

5 changes: 3 additions & 2 deletions fs/bcachefs/acl.c
@@ -137,7 +137,7 @@ static struct posix_acl *bch2_acl_from_disk(struct btree_trans *trans,
return NULL;

acl = allocate_dropping_locks(trans, ret,
-			posix_acl_alloc(count, _gfp));
+			posix_acl_alloc(count, GFP_KERNEL));
if (!acl)
return ERR_PTR(-ENOMEM);
if (ret) {
@@ -422,7 +422,8 @@ int bch2_acl_chmod(struct btree_trans *trans, subvol_inum inum,
if (ret)
goto err;

-	ret = allocate_dropping_locks_errcode(trans, __posix_acl_chmod(&acl, _gfp, mode));
+	ret = allocate_dropping_locks_errcode(trans,
+			__posix_acl_chmod(&acl, GFP_KERNEL, mode));
if (ret)
goto err;

2 changes: 1 addition & 1 deletion fs/bcachefs/alloc_background.c
@@ -1409,7 +1409,7 @@ int bch2_check_discard_freespace_key(struct btree_trans *trans, struct btree_ite

if (!bch2_dev_bucket_exists(c, bucket)) {
if (fsck_err(trans, need_discard_freespace_key_to_invalid_dev_bucket,
"entry in %s btree for nonexistant dev:bucket %llu:%llu",
"entry in %s btree for nonexistent dev:bucket %llu:%llu",
bch2_btree_id_str(iter->btree_id), bucket.inode, bucket.offset))
goto delete;
ret = 1;
4 changes: 2 additions & 2 deletions fs/bcachefs/alloc_foreground.c
@@ -409,7 +409,7 @@ static struct open_bucket *bch2_bucket_alloc_freelist(struct btree_trans *trans,
POS(ca->dev_idx, U64_MAX),
0, k, ret) {
/*
-		 * peek normally dosen't trim extents - they can span iter.pos,
+		 * peek normally doesn't trim extents - they can span iter.pos,
* which is not what we want here:
*/
iter.k.size = iter.k.p.offset - iter.pos.offset;
@@ -1478,7 +1478,7 @@ void bch2_fs_allocator_foreground_init(struct bch_fs *c)
mutex_init(&c->write_points_hash_lock);
c->write_points_nr = ARRAY_SIZE(c->write_points);

-	/* open bucket 0 is a sentinal NULL: */
+	/* open bucket 0 is a sentinel NULL: */
spin_lock_init(&c->open_buckets[0].lock);

for (ob = c->open_buckets + 1;
2 changes: 1 addition & 1 deletion fs/bcachefs/bcachefs.h
@@ -731,7 +731,7 @@ struct bch_fs {
struct task_struct *recovery_task;

/*
-	 * Analagous to c->writes, for asynchronous ops that don't necessarily
+	 * Analogous to c->writes, for asynchronous ops that don't necessarily
* need fs to be read-write
*/
refcount_t ro_ref;
8 changes: 4 additions & 4 deletions fs/bcachefs/bcachefs_format.h
@@ -239,7 +239,7 @@ struct bkey {
*
* Specifically, when i was designing bkey, I wanted the header to be no
* bigger than necessary so that bkey_packed could use the rest. That means that
- * decently offten extent keys will fit into only 8 bytes, instead of spilling over
+ * decently often extent keys will fit into only 8 bytes, instead of spilling over
* to 16.
*
* But packed_bkey treats the part after the header - the packed section -
@@ -251,7 +251,7 @@ struct bkey {
* So that constrains the key part of a bkig endian bkey to start right
* after the header.
*
- * If we ever do a bkey_v2 and need to expand the hedaer by another byte for
+ * If we ever do a bkey_v2 and need to expand the header by another byte for
* some reason - that will clean up this wart.
*/
__aligned(8)
@@ -643,7 +643,7 @@ struct bch_sb_field_ext {
/*
* field 1: version name
* field 2: BCH_VERSION(major, minor)
- * field 3: recovery passess required on upgrade
+ * field 3: recovery passes required on upgrade
*/
#define BCH_METADATA_VERSIONS() \
x(bkey_renumber, BCH_VERSION(0, 10)) \
@@ -765,7 +765,7 @@ struct bch_sb {

/*
* Flags:
- * BCH_SB_INITALIZED	- set on first mount
+ * BCH_SB_INITIALIZED	- set on first mount
* BCH_SB_CLEAN - did we shut down cleanly? Just a hint, doesn't affect
* behaviour of mount/recovery path:
* BCH_SB_INODE_32BIT - limit inode numbers to 32 bits
4 changes: 2 additions & 2 deletions fs/bcachefs/bcachefs_ioctl.h
@@ -131,7 +131,7 @@ struct bch_ioctl_start {
* may be either offline or offline.
*
* Will fail removing @dev would leave us with insufficient read write devices
- * or degraded/unavailable data, unless the approprate BCH_FORCE_IF_* flags are
+ * or degraded/unavailable data, unless the appropriate BCH_FORCE_IF_* flags are
* set.
*/

@@ -154,7 +154,7 @@ struct bch_ioctl_start {
*
* Will fail (similarly to BCH_IOCTL_DISK_SET_STATE) if offlining @dev would
* leave us with insufficient read write devices or degraded/unavailable data,
- * unless the approprate BCH_FORCE_IF_* flags are set.
+ * unless the appropriate BCH_FORCE_IF_* flags are set.
*/

struct bch_ioctl_disk {
4 changes: 2 additions & 2 deletions fs/bcachefs/bset.h
@@ -45,7 +45,7 @@
* 4 in memory - we lazily resort as needed.
*
* We implement code here for creating and maintaining auxiliary search trees
- * (described below) for searching an individial bset, and on top of that we
+ * (described below) for searching an individual bset, and on top of that we
* implement a btree iterator.
*
* BTREE ITERATOR:
@@ -178,7 +178,7 @@ static inline enum bset_aux_tree_type bset_aux_tree_type(const struct bset_tree
* it used to be 64, but I realized the lookup code would touch slightly less
* memory if it was 128.
*
- * It definites the number of bytes (in struct bset) per struct bkey_float in
+ * It defines the number of bytes (in struct bset) per struct bkey_float in
* the auxiliar search tree - when we're done searching the bset_float tree we
* have this many bytes left that we do a linear search over.
*
5 changes: 3 additions & 2 deletions fs/bcachefs/btree_cache.c
@@ -828,7 +828,8 @@ struct btree *bch2_btree_node_mem_alloc(struct btree_trans *trans, bool pcpu_rea

mutex_unlock(&bc->lock);

-	if (btree_node_data_alloc(c, b, GFP_NOWAIT|__GFP_NOWARN)) {
+	if (memalloc_flags_do(PF_MEMALLOC_NORECLAIM,
+			      btree_node_data_alloc(c, b, GFP_KERNEL|__GFP_NOWARN))) {
bch2_trans_unlock(trans);
if (btree_node_data_alloc(c, b, GFP_KERNEL|__GFP_NOWARN))
goto err;
@@ -1172,7 +1173,7 @@ struct btree *bch2_btree_node_get(struct btree_trans *trans, struct btree_path *
/*
* Check b->hash_val _before_ calling btree_node_lock() - this might not
* be the node we want anymore, and trying to lock the wrong node could
-	 * cause an unneccessary transaction restart:
+	 * cause an unnecessary transaction restart:
*/
if (unlikely(!c->opts.btree_node_mem_ptr_optimization ||
!b ||
4 changes: 2 additions & 2 deletions fs/bcachefs/btree_iter.c
@@ -2430,7 +2430,7 @@ struct bkey_s_c bch2_btree_iter_peek_max(struct btree_iter *iter, struct bpos en
}

/*
-		 * iter->pos should be mononotically increasing, and always be
+		 * iter->pos should be monotonically increasing, and always be
* equal to the key we just returned - except extents can
* straddle iter->pos:
*/
@@ -3216,7 +3216,7 @@ u32 bch2_trans_begin(struct btree_trans *trans)

/*
* If the transaction wasn't restarted, we're presuming to be
		 * doing something new: don't keep iterators except the ones that
* are in use - except for the subvolumes btree:
*/
if (!trans->restarted && path->btree_id != BTREE_ID_subvolumes)
48 changes: 27 additions & 21 deletions fs/bcachefs/btree_iter.h
@@ -6,6 +6,8 @@
#include "btree_types.h"
#include "trace.h"

+#include <linux/sched/mm.h>

void bch2_trans_updates_to_text(struct printbuf *, struct btree_trans *);
void bch2_btree_path_to_text(struct printbuf *, struct btree_trans *, btree_path_idx_t);
void bch2_trans_paths_to_text(struct printbuf *, struct btree_trans *);
@@ -874,29 +876,33 @@ struct bkey_s_c bch2_btree_iter_peek_and_restart_outlined(struct btree_iter *);
(_do) ?: bch2_trans_relock(_trans); \
})

-#define allocate_dropping_locks_errcode(_trans, _do)			\
-({									\
-	gfp_t _gfp = GFP_NOWAIT|__GFP_NOWARN;				\
-	int _ret = _do;							\
-									\
-	if (bch2_err_matches(_ret, ENOMEM)) {				\
-		_gfp = GFP_KERNEL;					\
-		_ret = drop_locks_do(_trans, _do);			\
-	}								\
-	_ret;								\
-})
-
-#define allocate_dropping_locks(_trans, _ret, _do)			\
-({									\
-	gfp_t _gfp = GFP_NOWAIT|__GFP_NOWARN;				\
-	typeof(_do) _p = _do;						\
-									\
-	_ret = 0;							\
-	if (unlikely(!_p)) {						\
-		_gfp = GFP_KERNEL;					\
-		_ret = drop_locks_do(_trans, ((_p = _do), 0));		\
-	}								\
-	_p;								\
-})
+#define memalloc_flags_do(_flags, _do)					\
+({									\
+	unsigned _saved_flags = memalloc_flags_save(_flags);		\
+	typeof(_do) _ret = _do;						\
+	memalloc_noreclaim_restore(_saved_flags);			\
+	_ret;								\
+})
+
+#define allocate_dropping_locks_errcode(_trans, _do)			\
+({									\
+	int _ret = memalloc_flags_do(PF_MEMALLOC_NORECLAIM|PF_MEMALLOC_NOWARN, _do);\
+									\
+	if (bch2_err_matches(_ret, ENOMEM)) {				\
+		_ret = drop_locks_do(_trans, _do);			\
+	}								\
+	_ret;								\
+})
+
+#define allocate_dropping_locks(_trans, _ret, _do)			\
+({									\
+	typeof(_do) _p = memalloc_flags_do(PF_MEMALLOC_NORECLAIM|PF_MEMALLOC_NOWARN, _do);\
+									\
+	_ret = 0;							\
+	if (unlikely(!_p)) {						\
+		_ret = drop_locks_do(_trans, ((_p = _do), 0));		\
+	}								\
+	_p;								\
+})

#define bch2_trans_run(_c, _do) \
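Aside: with the new helpers, the first allocation attempt is no longer GFP_NOWAIT; it is a GFP_KERNEL call made under PF_MEMALLOC_NORECLAIM|PF_MEMALLOC_NOWARN, so it still cannot block on reclaim while btree locks are held, and only on failure are the locks dropped for an ordinary blocking retry. A minimal usage sketch, modeled on the btree_key_cache.c call site in this PR (the allocation itself is just an example):

	/*
	 * First attempt runs with reclaim suppressed via process flags; on
	 * failure, allocate_dropping_locks() drops the transaction's btree
	 * locks, retries the same expression with normal blocking GFP_KERNEL
	 * semantics, then relocks (ret reports any relock/restart error).
	 */
	int ret;
	struct bkey_i *new_k = allocate_dropping_locks(trans, ret,
				kmalloc(key_u64s * sizeof(u64), GFP_KERNEL));
	if (!new_k && !ret)
		ret = -ENOMEM;
	if (ret)
		goto err;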
10 changes: 5 additions & 5 deletions fs/bcachefs/btree_key_cache.c
@@ -116,14 +116,14 @@ static void bkey_cached_free(struct btree_key_cache *bc,
this_cpu_inc(*bc->nr_pending);
}

-static struct bkey_cached *__bkey_cached_alloc(unsigned key_u64s, gfp_t gfp)
+static struct bkey_cached *__bkey_cached_alloc(unsigned key_u64s)
{
-	gfp |= __GFP_ACCOUNT|__GFP_RECLAIMABLE;
+	gfp_t gfp = GFP_KERNEL|__GFP_ACCOUNT|__GFP_RECLAIMABLE;

struct bkey_cached *ck = kmem_cache_zalloc(bch2_key_cache, gfp);
if (unlikely(!ck))
return NULL;
-	ck->k = kmalloc(key_u64s * sizeof(u64), gfp);
+	ck->k = kmalloc(key_u64s * sizeof(u64), GFP_KERNEL);
if (unlikely(!ck->k)) {
kmem_cache_free(bch2_key_cache, ck);
return NULL;
@@ -147,7 +147,7 @@ bkey_cached_alloc(struct btree_trans *trans, struct btree_path *path, unsigned k
goto lock;

ck = allocate_dropping_locks(trans, ret,
-			__bkey_cached_alloc(key_u64s, _gfp));
+			__bkey_cached_alloc(key_u64s));
if (ret) {
if (ck)
kfree(ck->k);
@@ -243,7 +243,7 @@ static int btree_key_cache_create(struct btree_trans *trans,
mark_btree_node_locked_noreset(ck_path, 0, BTREE_NODE_UNLOCKED);

struct bkey_i *new_k = allocate_dropping_locks(trans, ret,
-			kmalloc(key_u64s * sizeof(u64), _gfp));
+			kmalloc(key_u64s * sizeof(u64), GFP_KERNEL));
if (unlikely(!new_k)) {
bch_err(trans->c, "error allocating memory for key cache key, btree %s u64s %u",
bch2_btree_id_str(ck->key.btree_id), key_u64s);
2 changes: 1 addition & 1 deletion fs/bcachefs/btree_types.h
@@ -446,7 +446,7 @@ struct btree_insert_entry {
/* Number of btree paths we preallocate, usually enough */
#define BTREE_ITER_INITIAL 64
/*
- * Lmiit for btree_trans_too_many_iters(); this is enough that almost all code
+ * Limit for btree_trans_too_many_iters(); this is enough that almost all code
* paths should run inside this limit, and if they don't it usually indicates a
* bug (leaking/duplicated btree paths).
*
2 changes: 1 addition & 1 deletion fs/bcachefs/btree_update.h
@@ -80,7 +80,7 @@ int __bch2_insert_snapshot_whiteouts(struct btree_trans *, enum btree_id,
* For use when splitting extents in existing snapshots:
*
* If @old_pos is an interior snapshot node, iterate over descendent snapshot
- * nodes: for every descendent snapshot in whiche @old_pos is overwritten and
+ * nodes: for every descendent snapshot in which @old_pos is overwritten and
* not visible, emit a whiteout at @new_pos.
*/
static inline int bch2_insert_snapshot_whiteouts(struct btree_trans *trans,
2 changes: 1 addition & 1 deletion fs/bcachefs/btree_update_interior.h
@@ -116,7 +116,7 @@ struct btree_update {
struct keylist parent_keys;
/*
* Enough room for btree_split's keys without realloc - btree node
-	 * pointers never have crc/compression info, so we only need to acount
+	 * pointers never have crc/compression info, so we only need to account
* for the pointers for three keys
*/
u64 inline_keys[BKEY_BTREE_PTR_U64s_MAX * 3];
2 changes: 1 addition & 1 deletion fs/bcachefs/btree_write_buffer.c
@@ -453,7 +453,7 @@ static int bch2_btree_write_buffer_flush_locked(struct btree_trans *trans)
* journal replay has to split/rewrite nodes to make room for
* its updates.
*
-		 * And for those new acounting updates, updates to the same
+		 * And for those new accounting updates, updates to the same
* counters get accumulated as they're flushed from the journal
* to the write buffer - see the patch for eytzingcer tree
* accumulated. So we could only overflow if the number of
2 changes: 1 addition & 1 deletion fs/bcachefs/checksum.c
@@ -23,7 +23,7 @@
/*
* bch2_checksum state is an abstraction of the checksum state calculated over different pages.
* it features page merging without having the checksum algorithm lose its state.
- * for native checksum aglorithms (like crc), a default seed value will do.
+ * for native checksum algorithms (like crc), a default seed value will do.
* for hash-like algorithms, a state needs to be stored
*/

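Aside: a minimal sketch of the page-merging idea the comment above describes, for a crc-style checksum where the running value is the entire state (illustrative only, not bcachefs's actual checksum code):

	#include <linux/crc32c.h>
	#include <linux/mm.h>

	/*
	 * Checksumming page by page, carrying the intermediate crc as the
	 * seed for the next page, gives the same result as a single pass
	 * over one contiguous buffer -- no extra state needs storing.
	 */
	static u32 crc_over_pages(struct page **pages, unsigned int nr)
	{
		u32 crc = 0;	/* default seed */
		unsigned int i;

		for (i = 0; i < nr; i++)
			crc = crc32c(crc, page_address(pages[i]), PAGE_SIZE);
		return crc;
	}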
4 changes: 2 additions & 2 deletions fs/bcachefs/data_update.c
@@ -224,7 +224,7 @@ static int __bch2_data_update_index_update(struct btree_trans *trans,
* other updates
* @new: extent with new pointers that we'll be adding to @insert
*
-	 * Fist, drop rewrite_ptrs from @new:
+	 * First, drop rewrite_ptrs from @new:
*/
ptr_bit = 1;
bkey_for_each_ptr_decode(old.k, bch2_bkey_ptrs_c(old), p, entry_c) {
@@ -703,7 +703,7 @@ int bch2_data_update_init(struct btree_trans *trans,

/*
* If device(s) were set to durability=0 after data was written to them
-	 * we can end up with a duribilty=0 extent, and the normal algorithm
+	 * we can end up with a durability=0 extent, and the normal algorithm
* that tries not to increase durability doesn't work:
*/
if (!(durability_have + durability_removing))
4 changes: 2 additions & 2 deletions fs/bcachefs/disk_accounting.c
@@ -25,7 +25,7 @@
* expensive, so we also have
*
* - In memory accounting, where accounting is stored as an array of percpu
- *   counters, indexed by an eytzinger array of disk acounting keys/bpos (which
+ *   counters, indexed by an eytzinger array of disk accounting keys/bpos (which
* are the same thing, excepting byte swabbing on big endian).
*
* Cheap to read, but non persistent.
@@ -402,7 +402,7 @@ void bch2_accounting_mem_gc(struct bch_fs *c)
* Read out accounting keys for replicas entries, as an array of
* bch_replicas_usage entries.
*
- * Note: this may be deprecated/removed at smoe point in the future and replaced
+ * Note: this may be deprecated/removed at some point in the future and replaced
* with something more general, it exists to support the ioctl used by the
* 'bcachefs fs usage' command.
*/
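Aside: the first hunk in this file mentions an eytzinger array: an implicit binary search tree laid out breadth-first in an array, root at index 1, children of node k at 2k and 2k+1, which makes searches more cache-friendly than binary search over a sorted array. A generic sketch of the textbook branchless lower-bound over such an array (kernel types assumed; this is not bcachefs's actual eytzinger code):

	/*
	 * Return the 1-based index of the smallest element >= x in the
	 * eytzinger-ordered array t[1..n], or 0 if every element is < x.
	 * Each comparison records a left/right turn in a bit of k; shifting
	 * out the trailing one-bits plus one more undoes the right turns
	 * taken since the last left turn.
	 */
	static size_t eytzinger_lower_bound(const u64 *t, size_t n, u64 x)
	{
		size_t k = 1;

		while (k <= n)
			k = 2 * k + (t[k] < x);
		return k >> __builtin_ffsll(~k);
	}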
2 changes: 1 addition & 1 deletion fs/bcachefs/disk_accounting_format.h
@@ -10,7 +10,7 @@
* Here, the key has considerably more structure than a typical key (bpos); an
* accounting key is 'struct disk_accounting_pos', which is a union of bpos.
*
- * More specifically: a key is just a muliword integer (where word endianness
+ * More specifically: a key is just a multiword integer (where word endianness
* matches native byte order), so we're treating bpos as an opaque 20 byte
* integer and mapping bch_accounting_key to that.
*
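Aside: a sketch of the mapping the comment above describes, with bpos's 20 bytes of key space reinterpreted as one opaque native-endian multiword integer (the union name and layout here are illustrative only; the real type is struct disk_accounting_pos in this file):

	/*
	 * struct bpos packs inode (8 bytes) + offset (8) + snapshot (4),
	 * i.e. 20 bytes of key space; accounting keys reuse that storage
	 * as a multiword integer whose word order matches native byte order.
	 */
	union accounting_pos_sketch {		/* hypothetical name */
		struct bpos	pos;
		u8		bytes[20];
	};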