
I am doing some experiments that use huge pages (2 MB), so that the 21-bit page offset can remain unchanged during virtual address translation. I found instructions online for enabling huge pages, and they work, but I am not clear about the principle behind them, so I would like to ask.

The instructions require hugepages and assume they are mounted on `/mnt/hugetlbfs/` (this path can be changed via the `FILE_NAME` value). The mount point must be created first:

`$ sudo mkdir /mnt/hugetlbfs`

Once huge pages are reserved, a hugetlbfs filesystem can be mounted there:

`$ sudo mount -t hugetlbfs none /mnt/hugetlbfs`

Note that this may require running the examples with `sudo`, or changing the permissions of the `/mnt/hugetlbfs/` folder.

The number of reserved huge pages must be set again after every reboot:

`$ echo 100 > /proc/sys/vm/nr_hugepages`
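As a quick sanity check of the steps above (a sketch; the exact counter names depend on kernel configuration), the reserved pool can be inspected through `/proc/meminfo`:

```shell
# Show the hugepage counters; on x86-64, Hugepagesize should read 2048 kB.
# HugePages_Total reflects the value written to /proc/sys/vm/nr_hugepages,
# and HugePages_Free shows how many are still unallocated.
grep -i huge /proc/meminfo
```

If `HugePages_Total` stays at 0 after the `echo`, the kernel could not find enough contiguous memory to reserve the pool.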

At first, my understanding was that the system was originally managed in 4 KB pages, and that enabling huge pages would make all memory be managed in huge pages. But after reading some explanations and comparing the commands, it seems more like a folder has been created: files inside this folder are backed by huge pages, while everything else is still managed in 4 KB pages. In C, I can use `buffer = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_ANONYMOUS|MAP_PRIVATE|MAP_HUGETLB, -1, 0);` to allocate huge pages.

Is my understanding correct?

Yujie