linux kernel, mem_init()
Linux/Kernel | 2012. 10. 5. 17:10

void mem_init(void)


free_unused_memmap(struct meminfo *mi)


with a single bank of non-sparse memory, there isn't much to do here


the mem_map array can get very big, hence free the unused areas of the memory map

loop through each bank and store bank_start

if we had a previous bank and there is a gap between the current bank and the previous one, free it

align up, since the VM subsystem insists that the memmap entries are valid from the bank end aligned to MAX_ORDER_NR_PAGES (the maximum number of pages handled as one block by the buddy system)
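
roughly what the arch/arm/mm/init.c code of this era looks like (a sketch from memory; details vary between kernel versions):

static void __init free_unused_memmap(struct meminfo *mi)
{
	unsigned long bank_start, prev_bank_end = 0;
	unsigned int i;

	/* banks are assumed to be in address order */
	for_each_bank(i, mi) {
		struct membank *bank = &mi->bank[i];

		bank_start = bank_pfn_start(bank);

		/*
		 * If we had a previous bank and there is a hole between it
		 * and the current bank, free the memmap entries covering it.
		 */
		if (prev_bank_end && prev_bank_end < bank_start)
			free_memmap(prev_bank_end, bank_start);

		/*
		 * Align up: the VM subsystem insists the memmap entries are
		 * valid from the bank end aligned to MAX_ORDER_NR_PAGES.
		 */
		prev_bank_end = ALIGN(bank_pfn_end(bank), MAX_ORDER_NR_PAGES);
	}
}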


free_all_bootmem(void)


iterate through each node registered on the bootmem list, free every page that is not marked reserved in its bitmap (plus the pages holding the bitmap itself), and finally return the number of pages freed
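
the mm/bootmem.c side is short; approximately (sketch, may differ by version):

unsigned long __init free_all_bootmem(void)
{
	unsigned long total_pages = 0;
	bootmem_data_t *bdata;

	/* release every node's bootmem-managed pages back to the buddy system */
	list_for_each_entry(bdata, &bdata_list, list)
		total_pages += free_all_bootmem_core(bdata);

	return total_pages;
}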


free_highpages(void)


CONFIG_HIGHMEM isn't defined, so nothing to do here



for_each_bank(i, &meminfo)


for each bank (with a single bank this is just meminfo.bank[0]), calculate the bank's start and end pfn

convert the start and end pfn to struct page pointers

iterate over the pages, adding up the number of reserved pages and free pages
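
a sketch of that counting loop (reconstructed from memory):

	reserved_pages = free_pages = 0;

	for_each_bank(i, &meminfo) {
		struct membank *bank = &meminfo.bank[i];
		unsigned int pfn1 = bank_pfn_start(bank);
		unsigned int pfn2 = bank_pfn_end(bank);
		struct page *page = pfn_to_page(pfn1);
		struct page *end  = pfn_to_page(pfn2 - 1) + 1;

		do {
			if (PageReserved(page))
				reserved_pages++;
			else if (!page_count(page))
				free_pages++;
			page++;
		} while (page < end);
	}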


for_each_memblock(memory, reg)


since the memory may not be contiguous, calculate the real number of pages in the system by iterating over each memory region and adding up its pages
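
something along these lines; the same loop also prints the per-region sizes (sketch, details may differ):

	struct memblock_region *reg;

	printk(KERN_INFO "Memory:");
	num_physpages = 0;
	for_each_memblock(memory, reg) {
		unsigned long pages = memblock_region_memory_end_pfn(reg) -
				      memblock_region_memory_base_pfn(reg);
		num_physpages += pages;
		printk(" %ldMB", pages >> (20 - PAGE_SHIFT));
	}
	printk(" = %luMB total\n", num_physpages >> (20 - PAGE_SHIFT));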

this is where you see:

"Memory: xxxMB xxxMB = xxxMB total"
"Memory: xxxk/xxxk available, xxxk reserved, xxxk highmem"
"Virtual kernel memory layout:
    vector  : ...
    fixmap  : ...
    DMA     : ...
    vmalloc : ...
    lowmem  : ...
    pkmap   : ...
    modules : ...
      .init : ...
      .text : ...
      .data : ...
       .bss : ..."

sysctl_overcommit_memory = OVERCOMMIT_ALWAYS


if the machine is small enough (PAGE_SIZE >= 16384 && num_physpages <= 128),

set overcommit to always, since a machine this small won't get anywhere without it
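
the check at the end of mem_init() is roughly:

	if (PAGE_SIZE >= 16384 && num_physpages <= 128) {
		extern int sysctl_overcommit_memory;
		/*
		 * On a machine this small we won't get anywhere without
		 * overcommit, so turn it on by default.
		 */
		sysctl_overcommit_memory = OVERCOMMIT_ALWAYS;
	}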


void __init kmem_cache_init(void)


num_possible_nodes()


on a UMA system there is only one node, so it returns 1

and if that is the case, use_alien_caches = 0
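
i.e. roughly:

	/* alien caches only matter on NUMA; a single possible node means UMA */
	if (num_possible_nodes() == 1)
		use_alien_caches = 0;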



for(i = 0; i < NUM_INIT_LISTS; i++)


NUM_INIT_LISTS = (3 * MAX_NUMNODES) = 3

kmem_list3_init(&initkmem_list3[i])  /* param type : struct kmem_list3 *parent */


/* if you don't know what this is, read my slab allocator note:
 *   initkmem_list3[0] = full slab list
 *   initkmem_list3[1] = partial slab list
 *   initkmem_list3[2] = free slab list
 */

build the list headers with INIT_LIST_HEAD for slabs_full, slabs_partial and slabs_free

set the following properties:

shared = NULL, alien = NULL, colour_next = 0, free_objects = 0, free_touched = 0

initialize the spin lock initkmem_list3[i]->list_lock

if the index i is less than MAX_NUMNODES (1), set cache_cache.nodelists[i] to NULL
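
put together, the loop and the helper look roughly like this (mm/slab.c of this era, from memory):

static void kmem_list3_init(struct kmem_list3 *parent)
{
	INIT_LIST_HEAD(&parent->slabs_full);
	INIT_LIST_HEAD(&parent->slabs_partial);
	INIT_LIST_HEAD(&parent->slabs_free);
	parent->shared = NULL;
	parent->alien = NULL;
	parent->colour_next = 0;
	spin_lock_init(&parent->list_lock);
	parent->free_objects = 0;
	parent->free_touched = 0;
}

	/* in kmem_cache_init() */
	for (i = 0; i < NUM_INIT_LISTS; i++) {
		kmem_list3_init(&initkmem_list3[i]);
		if (i < MAX_NUMNODES)
			cache_cache.nodelists[i] = NULL;
	}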


set_up_list3s(&cache_cache, CACHE_CACHE)


iterate through each node (we only have a single node) and

set cache_cache.nodelists[0] to &initkmem_list3[CACHE_CACHE + 0]  /* CACHE_CACHE = 0 */

plus set up the reap time, the interval the kernel must let elapse between two attempts to shrink the cache; this eases the performance cost of frequent cache shrinking and growing
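
roughly (sketch from memory):

static void __init set_up_list3s(struct kmem_cache *cachep, int index)
{
	int node;

	for_each_online_node(node) {
		cachep->nodelists[node] = &initkmem_list3[index + node];
		/* stagger the reap times so caches aren't all shrunk at once */
		cachep->nodelists[node]->next_reap = jiffies +
			REAPTIMEOUT_LIST3 +
			((unsigned long)cachep) % REAPTIMEOUT_LIST3;
	}
}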


slab_break_gfp_order = BREAK_GFP_ORDER_HI


if the RAM size is over 32MB, slab_break_gfp_order is set to BREAK_GFP_ORDER_HI
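
i.e. something like:

	/*
	 * Fragmentation resistance on low memory: only use bigger page
	 * orders on machines with more than 32MB of memory.
	 */
	if (totalram_pages > (32 << 20) >> PAGE_SHIFT)
		slab_break_gfp_order = BREAK_GFP_ORDER_HI;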


Bootstrap is tricky, because several objects are allocated from caches that do not exist yet:


1. initialize the cache_cache cache : 

it contains the struct kmem_cache structures of all caches except cache_cache itself; cache_cache is statically allocated.  Initially, an __init data area is used for the head array and the kmem_list3 structures, but it's replaced with a kmalloc allocated array at the end of the bootstrap.


2. create the first kmalloc cache:

the struct kmem_cache for the new cache is allocated normally.  An __init data area is used for the head array.


3. create the remaining kmalloc caches with minimally sized head arrays


4. replace the __init data head array for cache_cache and the first kmalloc cache with kmalloc allocated arrays


5. replace the __init data for kmem_list3 for cache_cache and the other caches with kmalloc allocated memory


6. resize the head arrays of the kmalloc caches to their final sizes


INIT_LIST_HEAD(&cache_chain)

initialize cache chain linked list


list_add(&cache_cache.next, &cache_chain)

add cache_cache to the cache_chain list (via its next field)

set up the rest of the cache_cache properties (colour_off, per-cpu array, nodelists, buffer_size, ...)
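
the start of step 1 looks roughly like this (a sketch from memory, debug-only fields omitted):

	node = numa_mem_id();   /* 0 on this UMA system */

	/* 1) create the cache_cache */
	INIT_LIST_HEAD(&cache_chain);
	list_add(&cache_cache.next, &cache_chain);
	cache_cache.colour_off = cache_line_size();
	cache_cache.array[smp_processor_id()] = &initarray_cache.cache;
	cache_cache.nodelists[node] = &initkmem_list3[CACHE_CACHE + node];

	/*
	 * struct kmem_cache size depends on nr_node_ids, which can be
	 * less than MAX_NUMNODES.
	 */
	cache_cache.buffer_size = offsetof(struct kmem_cache, nodelists) +
				  nr_node_ids * sizeof(struct kmem_list3 *);
	cache_cache.buffer_size = ALIGN(cache_cache.buffer_size,
					cache_line_size());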


for (order = 0; order < MAX_ORDER; order++)


cache_estimate(order, cache_cache.buffer_size, cache_line_size(), 0, &left_over, &cache_cache.num)


based on the object size (buffer_size), compute how many objects fit in a slab of 2^order pages and how much space is left over for the management data; the loop stops at the first order that holds at least one object
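
i.e. something like:

	/* pick the smallest order that can hold at least one kmem_cache object */
	for (order = 0; order < MAX_ORDER; order++) {
		cache_estimate(order, cache_cache.buffer_size,
			       cache_line_size(), 0, &left_over, &cache_cache.num);
		if (cache_cache.num)
			break;
	}
	BUG_ON(!cache_cache.num);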



Posted by code cat